[Binary content — ustar archive, not recoverable as text. Archive member listing from the tar headers:
  var/home/core/zuul-output/                      (directory, owner core:core)
  var/home/core/zuul-output/logs/                 (directory, owner core:core)
  var/home/core/zuul-output/logs/kubelet.log.gz   (gzip-compressed kubelet log, owner core:core)
The remainder of the file is the gzip-compressed payload of kubelet.log and cannot be rendered as text; extract with `tar -xf` and `gunzip` to read the log.]
bL <* s]`FʩR =L)tަEk5hb:JwM%i./3TF%Ysh-JlS\-&x#32xs3sMhŷZڒMW2gqSLh~g72Hhd0'ZK(Ir %pHHLBDN&dʁV˾!\H(eOEof9AQN& =rc4{Djy7cZadPg($^2#Yo",iP%LR#1s)d q˶rZi䴓=0E]O©{b`  ]J(U3WXORn2 0ɾzM_`Y2J͘"nj M0ֆ/EB~S|Sء[Y6FY15dUq~ttsvSe"!-$'#`.%|>#" +C" 9B\qkEBu-zM%hM4;GIgUK s6{r1} g4(`ӏAgp7muҮɊ|=[]/Kv$USs驸oofN|Q ,R#I_HR&8:MY,pXCWY1E*$D>!ER")wcK3fWSrd=CMP gRQ1xΒj=kCo)ȲgNe8ڂmnxp\6,QD;XR/ 3RLsyƝ^^)fS%Ϻ:ݳbuJJ+։r./.L Szիe'/zВ6ltr4o/dKm6.t#FMYCrK!=\}VU w]k(npDpbqq~ڱ1Žq}c&6{XK\X4[aVðMNy'欔q5Mtu%/}{r\wX>s]4Xg!@'z,Gn˦ڋʋBQnawRअ+.eS)qHQJfQ!:Z--,xRD$B9hڙٳ8%&|\?ʽOVopйxMQ9?Fk?z޿?xS.x{g~OOC*qig=ԢY|}ثY`>ĉҫО* |-995e'T#SQB9a'BI)'v|~8)=a?T58'Y=rPO8w߽>opN>?WLUDP"$S^{]ͤ'vM6wmkߦ_+7B}]3{gKn@x緃ng7VM/3zzanϔKR!(*&Dy#f<#܀JCy'!;ZhM<,u=r'[ N t@-K4$%3]9{iqE[n]mxh|=ځU!*%J; v$V2F|e!S6'O5QF>cOک3> ծRֽ/;ʍKK/V.i/K>'_@;σy^dd|%(o ~ցKxSM}^yon%^1^ GOX(ع? s*ET$q&8?ݓEmNc"$n⊇zQJ& e/==ZREpAY-]ޜUHosbB>tікbNsj9y6,LxP7"6z?UO-^%Ŵ5۫13`\3zQj졟[vfNOz8{D \@jכ{W'qH$x(&~j! %wH,8 Q 5"Vt_D Ȏ0UF<) ",?22 R-|)+_F_o<"&mvD Ǚ-$%LOшA@)?O;|󘎶>$4f낢LQ9,Dx kRA2)E$@n2= -{ϻRdAh_;PnMy1OiSۗ,;xQ B L΢wG@y$fԊWh'93"&OZň\t0IB81 B3⩣9'JgʔT1bklƈh }Hh rjߚ*">72aan zNN jΘY}%}|ڕB\LHGʊ c.(\ԶphC] }1tJILh"BIPcZES YK*Yقm xb] fn̬WK7$EQ(q/6vűCG{BG$Oqa8.?ZDA Ȃi :V;A@"\H<7$@j!8!c,rZn1/0ONZ:c93.Ji $zWZ5> ^;'# W꟢&-:Uѻ.@T8p>:Q(ݾcxgY/9MFSĂ^PsPC 6hlɩ1LAk77Mhc"1M\.m;6乷^Sl65?mmf:m`wVavϐQ[{no'\3}Oku;tpm4HBE L /@ (AI w]՗xλ~B퉕T|1rף៷GZvòB AJv &N &dHf!sHu mm" 率w;*2rzt+/9dW"7OUxG/.=Q䴳FFCiˣ6r:᭶&H]Ҟ-U1%_ic|g85Օ; .A;]x5C)ۓSf8.S})6\o`I(bdia)Ҝ|d4jO y,iKۍ/X9+h*on\IG9"7&@%sR'P8 1Q' xʉiQƁ>LZA=#`A -&f8e{/Җv"GNmIL@$#{25(Fd@nWN}trH)g-VU2,7p>d6u9G "}$V(Z:Z1'L Rs{tLQ@y^Gdpm"gVa} bEA^uV\D+֌hu0 ]8UA'eD0_v^ى(=Pc'=;Qնc'zWOc'·qZQ]*Q*p().gy&H@؝ N8S$&,{DxZ}cȄQFGυJ <XC#w&@)Qy`_v;ʢS5θ@^\tyMZp ?O2,h3/ iR=l󵌶X;tgXvg)cOPA /US`\-)DuB*,H,:U@dw+^R(hhpon&ͧF^@;>,t};44D܉C{Lj 4޲ш4p<ȯ_|7y$=ɣF&n,VaIHeӹR7*T,2%xD qlTCHu:thJT5 $( Xʊ+(Q{pVQ9e kviZiԴְ(Eu_"{d`w I]g(fFMR= KR}COR2%տ¤zFb#2WY`\c1WYZ\e)9+4W*61W(.'XUC7WYJՙh.09a2J\ei\e)Av5+np!?÷T.0n.VW7ױ8V0q&$_} 'UӀ/3@?9aJK}-#RF)GckmH_?{ d~?K aB?%FɐcyqgI"%KL~\FN߳KTJ+Tӂ0žU 
3|DFE]cWWJWra]]+-IB?~~T[bVϢLC"9B f. '",Yz|%z&8V"AbH4F#U4ƑE)SYARo.S )Hv~GI)}yjمepDybT$v&>rݼ}w{%VԳ=NޛI:;PJxSHIɢqN] jNhBb̥ǁ 2N0IHfy)Oa[BRb$5TX LjX!dP2Q7F(eDo񍲤d|2Q7F(e|W I$IHg,GoT2I8 Iog|o2Q7F%zo2Q7FyN2Q7F(e|\:e|o2Q7ʫl7FYƲe|,iYҲe|or“.p9[LqMؖ `0-xM_ஃ[ J-Ѣǂ M., /*2.#ͿS[|pB5<~b+t@Hm rVk/;^sXesj;1 lU۵ܟFmdiIi6w5)W:.IiߝѸ~^\$XW/=5] g<Szۙ3] 1_$z&X`T9F$㑅H>HttD b FрG!eLD{YoMا߳ uFY9I^LnʹEsTbr^l__=mc%]&Q ,Rg2DO;hL@(|,s$#mMZೀNC!M'`͜ X!]6XMfnQgL\\w3 g%A,O,kkb(yۮ^m,;i)D7-m=@hlimC%=n ӬIϚnօCWw"K-W PuA^CQ,;4L6ytqs-b̺If T³A=I&JAIcܮh]|3k*--։6l}=gn^SFyEYQ|?mi㤅W ]jW0V@M\o>LoF;4UOwѪǪR!2VMG:񞧐B^P+p8|^cK0, K8qw[0I#6E(h,;L8;rpvl ؏+Y]eO&,lU P`g1 Ny E 2 "QHѮ !|Qi)sCkӣۀnB A j"1 39"9+y3BGszT"%1dNF03,*xN$NQX!FE h.ڮmߚ/Ҙ1X:Jj2$0I90ӣtR2 4PDZC 1"b aD+$ "!I +H_+\  +b!BFj4`c `$oѹxMڢ]]īaWoߔ ™`vaqwIqKbb_e6Yja:siUY`y3SM%l4{j"sϑL{_NG mL!bNdmt2x}~+aUiC(t a% WIyrOvpq8( |ŧ44>ܥ^ȔJlQ7Ou:;NGK!ESݩ۪ 9Q| ċ71:qJfTz6Νg[wi>[uUw>7L/ p\s\/A..z1KDpʹGo>0xrH5ꑮ!a#a`,U|8&zwݛ{8-G%hlY" !040pR7NR]|;_>T5 : ]; >?1Qoa4Tw&ᷝPxC/qҎCxbֈo3j]!>t1n|-OoK1'8-VM{Jz;~ dM:3lܣ %p׾T!|J1iF/o>}ˉ]oztך!_$ʝ$j08Xi2HF w̮ {(fG3ؼ 3W׎2yHJb7,`BN/NR < N4'JdT33Z8W "ZLҝj͕"{9Ez&+o7Er,"p!Tb ̘מYXx|T3`9 qpY/[9- -AƒEiCS{p<8&s3PKHʭnT!" Hʜ4 P% yMZaiN큜Ջ`1$xMnw_E?-9ic-LXXHa]ys9L"Ik'8!-" -8fPj=ȷx-)֚#:1"LG Me-1Kqye4zl5-<-[eeJ7t|[& 6Ʃ 8Ųn7{dB_ߢ eDPgevC*}Ѵ\9A. 09QWL26c])]L6zfj]T[Bq6@pnnMr=?5nN~ v%W7tM/ yהgM-7 -ljߏgqfvH^}\*͍]Ӊ{k wc/l`igV1(z}ww-( ̙%! >(!1%,{ -rs!Np8F,BpH# be}0z)#"b1,!Ѐ2&"ݲ9[ck\1ܾXqsM.vs|̚]5Cwx s3lW߾eOuqt .rQ^ASn"AN"f4!hEN ,2Ҫ;y-8aa(3*h2:ڀmT T*:dZ,+ }M|eqڧ\f"{,%\rR^K9{)g/e먶r:@6r:@N9 toh9 tr:@N9 tr:|sV$d=tr:@N9 t=G{X~[)i0. 
׋%RH,0B*JRP5hoY.Lљ)n:;ksx5G* K`"g{ bLC->O}!5$Q0/2EjKj io)"'` nېl͆LRh%{D$fm$9t%=p膾CN}V%ݚwӅrn QXA8 InAyVHJZdjUVrXA,M` Owm#Igq`sf7 Y,O[kYIaYe$ dK]lWUVi,x:Rʃ#`zqyֲFΊa]w2N$By@q*%VU%j#L9EH M|CB>QMuL8Vq$\@KeX60Ԛ)/05 D4VGh|\.Ng#FEI (`s$S tv"C%*\퐌ބѮ"?eMk'1؄R8`PR8J)ο.0)R .UӰ\q,13!Z 8;s-:tYLjAu&m=ݬ;4tP7>Ϯu8g#c}E˶v~v8Y;@}_FM ZGPHx3jg>LLjRFXpp t570S^ z,?{^q><{06w=CSÌ Y iXUVYxd>ZvPG hS\-&x#32x3smXb;ZQT_͝$U7W,dS>#|_b逪 OP K=FMґ0хTm}EY*=Z3 r[ B{iLv/ X'cHƴ (QHLd$&A:DXҠ8KFbRN\2b$N6)i{-aEz K>m)7LHƎ9\1΃ȁ85i#ႂn)6gF\s6;XpR2S{snvt^Ivq\Fw֖∶rx3 [#"-+C"9B\qkEBuz٬zɭivP=Kó bvbPh|Pz韉&y{n7Vס {!^N=|W<6o>XKKwQ`tY'7`"{&CDYOFce(D+Dd%0liCndQ@3G=[)Z5#H#oLQSWOC ngpOryN.wG.KI gIh\mY:xaxz^zav)}'Զas>NqDdmqIAn;-KZn!iu_U(װ X|`a P-(ʍӷðAw'0mQA TrdK7,P=q~$={e0Ư&7bUAIonhb ˯x2 u" tҡ碯z&64kP^* oKbo@.w 5 Bl4R(a!^H<`YD2EոՒVS<[&g])co{va몝>ݸ)]R9%Ls$_[jlz9s:bG%β-56g;؜Z"|ߝO&~#x ޅ$4y$9,XtT!PEB7޴z]ݗhFs iyLAa! rL5Iik圕NQ́<]5L.szT"%1dNF03,*xjޝZ BʓH&wwcz TRF˅u&Er8Ɓ!YOTSiy0رVؿkk-ޣtR2 4PDZ!gCXwXp& IH=N9J!f-G0ȧRaq>XZ4`v̂q0'|@kRwW+q4ޔ ±`駊Ɛ E$28;Jg9;7o"du<+?T]V?8׽@/ق#b_EjG SDbMQKۿ=,Ơ/C_J`pUtb `Ro .`XJ.C)֭N˘4qP/ӵyh|ZF.mveJ%R(_ۻoZh|yu4)1( ~Lpп,t1w?pZ4JóNigՊ̄H a"#ibȥ.B6YRGgj_RRFͧ|yU0yKg& Jd¥ng6fV<^\I]|'A/΢"t3,UJ]R(%mS=Ҍ  ܗ1=e7tM<-uoI;I`fq@nXPؕb[*q:C /m輼_* ||f^F,P;'FRΈ L [)pSQHMHw)5xf>F}TύSg(]Z[_w:/-U]RKx%|fo({ 8L0ǣ~l!I{ɱ6$Qs-9˝R 3+!:곷ٷdU`׊_{Vrv!H7o Ry)GK7Q=D(\!r -%& ?}ցRe?g~BCMO"=| r{D!XD6B1nV^{f &d.+.{C gJgp7z|)iȻ_GS|=)\G, e}+0iq= [%ӇjNvעL0()eiJf!g_+\wC B@`JIh=.k[y,?L6lL^jm7{Rw6qV}q}Yo0`&w6|f|== _~ʪuu,>K9"Gpcv;MIM^:U?ٯ3΅#u MF]Ew-u**w]]%*5TWiID{jytdSWoQ]q^}͠}ëY \=?2WgHGՆ37|YFFf4^{CɚYaM:?z/˵k Ƀ?t )B7Ve+w _润|xR's2F^Z (y|.u+S?YpPRj&Zr,}|8P/CVC]zIT);+,_xpDxf`y$ Lh.%ȹ茒[*w/ݹ}?C~}WWXqb]J-H{ b`b`C$XP0T¬N/Tj4v+8v)&8 ny+5rߊ{vG cR(*-""7Y`$!$.y*L'TW2!H')$) gH )TK NpXb7qK!-_^XEq$\@KeX6RK"2e% )d4&jcݴP8%|\ukwSư`ED%)^s$S 8+ۄ MukMpw2C26 Uѷ$dMgTf ښY 2ͶiFT:xa?LiՌKy+n%u~/Y QA`bҹV.2(X[ Gڇ8&&VQ<*!zƌƁZ 1b+C%"2<(N#?{F俊~fp`0pXIv_,%jYc)Ȯǯb HSo};Z:?ORA yv)mzzQI/zu݇Z~&?Λ8UfcECd`$Xd ګ@WoP_HA8׎u\`Brɖ$ohSϷlKL@cyk%ٵYV}kr8/C0!P 
)T;c,{-#\MFS`_vyQsTL'!#>{d'3\w8:J(ƊiSYPZ*L6{>0$5Ḭ9N`.0 r# T)q!>o" ,7dWM~1=#/&?:w`:h;I܁'y\]zX|_'k䗺|?kB}w )Q[M*FfdT3sMXb[RfMniMλaxr;q~ɴEBE# 7 j-$* D{"#a:0- 9K{Z+$,r!c)f<%  D96JXG0y4!a6ZkeɘV!!: $H[Ksg H\ 2iBj[Ik%}Jڳ=,\EmQk,/-`[`9@]I`0;{Mஃ[2J͘",Ub0;Xpbi2[>y&ۨmjkخ<~*O.ii1t 7l`Uȁ"i!Fjq! "!غ\hV}t vzL4Ilָl4~އK PcŠ} $t> -}q|8L rH# be}0z)舉0+CʘtcKYoMV&nMvoQ.{]4ݒi _ssƿ"S{|(0 VnRnVrr |+ ߩ#))f8]ld@` In5Ӭi rrV2WHf+`N0v7; h n.\ UNTJC"L Wp.gQCͣߜo~*#<>HPC-h5!<[%eӻQm XٲTóE70An(|^OGPSt~Z>-=^ Piʇ @f<kpկ&ͨAurͭvUع^gNiD's^U8Pas% Ш͘$,b4Ht9;,~\ʯSw.Û r=,n~"up^},9i~6 V}tD0:9RL݉L•ߞLNB_g_%P0 *:mqx0$!\ sg,qPL/;V[~н ɕH"mor-tzڝ^eX Y'nV0dQ|71_N\L9Js#L~ܙSٺS4O'"gU{C"0[A L͈2{_kK|m6:4Oo0x rHՎꑮ!a3a`*&+>o]&EdݨsՊ̅dAC%#i`إ'36wAߩPk:(: ]; /Kٿ~w|>Dg?'Xip)I] RL;vY{C^񪡩byAS.|qUSw3s3 _:{Bn:ϭ&xĽd_BI*S%.0 QK:H=>ϻ"p_'K:zH]oztM<,I;I`fq@nXPؕvLŌTHu6vo2,S/>hey\n4nWᖇ Rjv%`\0s,G@\cgmI gZr9Afxeh 68G<Ȩ-* Fk *DLf"jLi foK4D93X04L`D$4-J-1mǺ]9IILh)& I8ќ(m@Lr#A'H胈s TWKgyl>üSٌ"넗cZFK`Tj^y$<qPXXel-Kݩ떳j/6y#Z|Aw<oh(jܿI7!07Ÿ ^fQ+1Zyr[Uaw+Gjq~>m"Jc1F13ɬ`)$ŤEa1H0BtD%4eFFg# H\^cU7 %p`rƶ^~ݼ5YS~.2\M]USvnmv[oƱEGBGcMzibIf.Y/D^rÜQf YcI"!t vD jHğ 1c ,!鲂EH"6-9!^]q'qǴ6R)3&y,b$.XF޷QsX9 i99V C #Xk"4&C0!B*C6,F佖豉hj40"FUB7t|߾I6Qd46 2ĒyE >E, sn}&DX ]VxigYrn,rDZҊɉld\+RMN ń,JaYѺB]{%87b^Mz^h |q6W7tM*/F¬ՙf5{im[ ygGvH^)T7w]ӉK wݱsk1Yw*1޶~|wZ VlOhiwGQ`,IA )=g1D޻`\h㜧$8ѷXnU٘aˆ^xd!R&R/5eDDL 4`A8H77FΖ/$b,ww4ݔDܟv>#]Bwa󙹞^mW'u܃>v75;).x8%=?{WƑ-^F,dol,Q"R UK&"eK&gW5Ur7j1/9d&?a.(e #gA`zqVڈ/+0dgd,2P57ۖ\g9ɓi%4;Tl:_WJV>K7;`T3 ;׏Y K|6]6 *;ɀ݄ԔܵV/~%l3o֫Gx+Yv> 8W.З!~"~}{mv7M=&`PsLr2ΘB2գq1.'.Rwna`Q1K6f-|\ǵTK%_*ؑ{A`RGSJmΌ6>)N+|"y vb*/'W2:r'U3cXvu/)np09sH[G‰ʠvZe={tX^MBZjc~Εyӭ{o4d7D 3n\_USLa|S 13'oltcT'׏ 9:@8:?Gg3QD d V- xZ-9RSc-e`WXٴDه"BTnJނ8CY壓QߍWrX/EZ؋ia/fz @aH;6є2Fnk0ZSf%I?ײ47-~:JF >U4vtsYڥKZ?|3.,JN;-b{~ηfl-3A@S`P"<eaP}0Hh^d*\(i8痟^𷳊9fY^yJw%jo"3v&-KT03tp G8s aDpl&H8$zc!C-H2i*0 XcRcLu zS=(|I֓tX{%)<Ɓ Hk"h P~)䏝bo i $ᓦ'& Nx, SZčA_@+4%;N!(4Ep2*&\ZArSyy1hclQe Ғ쭎¦/W h;hK9&ir6+Ͱ9WUb70`*{kʾ19\o|cnxkC\} ‡ܬi $I. 
ܻQi˵up=~.LD_+m-g;rJi;];G0'5s\5f6ڹU7aLJ*lb&uWp WZA]K&4Ѹ~^^&G_z}W(tɳLMqe>e^To]5{DՠFV[W̬+6ZM]K>e77dl͔ ̌.䦓,Yydc<`hp%J[!"Ŝ(aH+Y, s\$0wA ez-Wi(::X1 `4rEv;\"`vy8\; m V'O,SP'xy~cj`Mw6#곶ԶasϮ'9-m$UǏ U9N[pwn-W1,(쫿ftg 3.3 1{M- HkXYW+T53IUaomTxrړAVOLQ)Mi{웇 K,׏li*l։6IE_{h]x6G9Puo'-9NZ~rB"Ao|eZ=i+Xk $hQ C F#x80e:qu.xLf .5;ַlT{b7Ѳnn֋^WuK!?Bj@KCkON?e]CY@prP6gN\GlQ0Yj@t̎;}+1ҋU ;N,cƟ|4|l5F IRiHrР TT N1LC9@H j"Fv!8Q[JlB*85*wܘY<cwyuqU,0[B LM1{]^sKtngD`8~\9Ln? {'{Jt iF7w3,o@h} 2ÿzG>{7i+A{kX"S! 1v? 0fIm 7BxhJ9kޠ7.^|*~o?}}7;x{q%NHzgj9x?o '5WwM[ߵ.35j7W9~[%SlQ.7r)~YI/f:WA'g1$v# @p7P!K/bF=ҌPEM>ᢌ缱WGhB^$Jr#̬`8h?[)pGv%o6iJPc;oÓ{jkGWbAlt(QjH1IA@63y5A=`<߽0Z-f_ԷOoW;U ZTQn_Zt_Z/ 4[钲79i] /#?{ 9ǑAF]"N^ gra+ ա +QP  l&r}.BZXhXWX,TJg]m!K}!rFb `hn)1H_w.nohS ;oM#VZ%&&GRka}p^$ŊHGy@hN6 TwȠua-N4HsFi㛧> <"ןxEV / F@H52X33 sf<`,AaaV26WAh8s?|Zs,;@%)oNIGS~-x Ver9se!O.#;++,nR;-xAq9k4րutRDkE>z'AURd ,d,)``Rn&x92J#(ˢ`;8cViL6Fl4t> -)";ʓK1vߛ3é֥.cA@L$ OȹEpv964<0smCE{ |ρeLO?y-Ĵ\u3cn#u2;7׽w;{GNqBZNENG,8fPj=ȷx-)֚OtcDڍM!D!e!xz{ZFc_#^ˈiDk45[-#0H7CqX!7NYܥۇp,.3W!l\S,bYT`\"xfB6.U!˥S+I+ &'[Zijtd?[].Ze nVq6\;]5|!ðsWD<v1#5]6C0܏6իo~C]^>Da9A(A )7 'W\KV"6K*NcF1N 4wFm6*GT GPB]n#9vUd0{1X`ؙY.Ol,7-lQ.TAJE2sHƉokq+JKݹ71bN%y,, 4gPLϓO>-Np; t> RAi+H: yST=.|gO'/cGɘ2I.JL(Jޅ)&ڣ׎zNZ'ˮˉ7ӔUũ)_yYUX949-,$ ~=&H]ĝ^ܧ.!ʈFFљ+WR7|sk RA' J;/Au*)~[R $*J)'O6Z8;/S dQUr.%Pj4FL9'rk;)K(SsO dLu3g[ k Lf5Ab8Ӛَ*Ԯ&JkN?Q((x@RB`!_k:  ZyMP˜X :u[e׷[rJR8!"q$֪SuMmC"$,XS/r4gJ/ij^e}|LVްbT ^[VcbsF'R6Z&Xc'cOӁǚt܀S`SUsQ:UܩWV9VzթEp0(~ڛI^*k΄懺y+u2Fry,bPhˎaw0  +@"B0َ;m^Gގ5RMiJ|h,Pk+tR ;Up Sf.E2VɢRrHE\ǽ:c҃.+wYwΟJEʃ08 8XOsh'3ņy(8o?R]0CHRx0!F m2^}= v+lʡ / uRj<>N{Qs_Wz M&Sֻc1'ĺbdR?qvr}O>uӨ}{!q93 śZl)6Y! 
9kTPT5UFFmMI ľ6%CO5ͤ$-ȤV =S:263gQQbaX"e^0:S}~rG˛7qt~2:xD&C*Vj(Mi52I\b$㔰Nƶd&[M80Q +Ve@ $2e*vEԐF( %Ae} ir^9+ZgB%]FdJuBdFe#q@6>ֳ 0ziap900!M"0Gc@!%j JF5W$jTk O=QZqLx5㫇wNlSE!W4kW$p_4\=JeO+\깯J PLjJn Wo0\qF+6J7pUU4pU%:pU4b Wo0\)[7l (\U&\(PUO V+HbBwSEgW]:]8meia0‹tg> ٿ ϦWK_;τ@; smQ!UL:10)N=LW+Aa iFdlAZN=\U+pÕ ]UpU5fZN~Ύb Wo1\1bfWH竫?ǟâs>\/Rv*=//*Po}^KH(}%*2IP ^H^y "JLh Y )ܫ^)=lEMk(5NR$KI`RT"B'l6& M^Xs(<_5aT_>M?>=Ik;Ԁ?w/VMG_O4o ?>Б M1.vD緥 2~\9o-.dPSXFIѺlګ\O<'.y>6Dɐt֪]:QߠFH$DҐ^ 1bb]+|yEy dXd&W 䦐HIPLr>'QB ŮF" 6_00KƐ9!(tJ{iw73ooX7u??J8 Z?HH%'0G!(Yf*]]9VD#wﰒ;- V!vV2U4+wJ'W*hP ɧ4Ei?9OR?%w2O\zY=+"\ [nwu;{#7vs?YyW ;6R1^?D;zn~ss^l5"ln ʎ\j~Mӝ/~jw^rs^]mbϼKIU{.nW| N[y EZ|m ׼$Yi1'\w*^36r#qs@7#E_xt|>-LZH򌁥SZY-QY$)J #w? wG1rhqgҞ9( sPhKw66UEm(@*iͱ#"(iCʾdBE*KAGa/5ّOzzˆ{7,zcۏkVpK"omŏ7ypI! l/",9 ՅXi M=8:I%NJ휗>+6;zWBjpʳS(|cnfv^^W^z71bN%y,,44g[s@NB; 'D!$(tM+\Vd'%IEɐZEɻ#TM{1pI63gy\>!rpyLS]W_H_0g=0!'m *TCdI^Tsa0ZN]ZT4^ +`Uӫ$O }z_ :a]P:d`y 3VAYޚ~}HN*9T'%YЖdو<&O(S7@G4 tQ%YJ AۂH[)RtmiDqɣQej2I{݌ dLuFζEn ,jbN>'iZ&ɗt}5W g-4WK."?ipb <ۦ?ac=uw]4EN|I Zgxg1@t6CTC>ϙ1H 5T[oRu@0 8Y J3svܕ3=<%:ƝH`&b^l%YKK8c25eg0_|[>\'\XKB,<"l>Jir`8>$BN‚5 '_(IャtZV>ۯf~G;%U0@(,F`#K6:IRYbPJxex94 4ǫ?Tj_{ ڻLчzņɠq8V%8웫I^joΤ<,!<} fddJ-;JQ.Pp]a"B0َ{qQ?vs1~y.:zqlt[tg۱:7>j|V踏yޖS>k4lbbX:*)O"ªd 512]A}}@ky/RzvD*3Rzxfn׿|+2$͢|g: -_Wͯ<&nӻF>_/?]^}\}+`B?5r<T@ŏPW^*7>xnwW|pyӗ[=f݃MܣȻ.]ev֢VDwGw$F 8}.Z#Ux+pۭtڳ)\gYuE9>~B=u'.=늀SO48,=Ԗ!̞')Oùxܶ?~lfvsh"zYf,ѓ2 "pW| TдU(tXşIO){2f 7)km evәorXuuEɃ9na+i#O\аӥ%4*o_/ڹ 2<.g]xC+3-&+DA!g JȨ))D֦$pɱ&T\s Ԋ!a-S[Gfo#J9J,l3 Wi[fpX@gK~ޤ1oąƟ_# [XO TF4MX&pSº2:]b."ۂ=l4kD-<6>[1 Ȕ)lؙRQC5vB>C*Dd-P1@C%c *'QBVL qT9* Y@gmMV=gqsUgE'#ؙj|@JS|EQǖj9C$ oXaE$3t3t-}R%O2'LbMԴ r PÁiuLT뜰ړQ(xGVv!ӻ\})gZhP>9,,dArAaRNz!ZL`Ny͍ = ڜ)%x4SUR \8W4ZdgP  X8`kg]W;^bH`QPD&)n=jy2H+'%~dUV:)9 V_0{A% 2Qá1I(Die.k.#=͉~КsB|IBju@|19z7'bwG5E?^Vg85-M9(!/5;nt _U aσ3;&gVUgGkfȠGj8 2gHf FC ;rŽ.ѩSl/h6GxqG-~kI\UG'$˕DȀq-@Kr П>I^Aq7>5 zI̅:[nnvI4O?^|4q-'w^nIƑ@iIfyc!.x$,pe2YL czDj_~۩:&l8n PO?䟟;>PO>ǓϟC'y9S <_D~|3_?++CxC&h[O]O.6a5o-١_ܒ82wps bNZ-a\Y\ 
,d#_rYԛ,=,Jٗ3[ʑf|Pn@%'Cމ{Ȏ]kz}tM<,tHIRpZ,ѐveo6M;ouٽ WGkICThKw<DY0AIri4S dAo]Ύa]jmM<2juxzSG(]{ZT(7/{WK_2D(8@R !Ab>LPz"+P%Z3+a`EBugo!6 dg`؟5x)g,>za)TiKHU?ɓyR茫UD?u&4b_xWM> |W b~Vt4A8jF&8?QkPITz;/J$HYUGYY.D]w+050 %$ YGE :`m+yޭEQʆZ>OեJ7:_,(`wU1j:U=0*oP }2Wzk]ѯ @-,;:xzomIGYh"uqf I SW4"S,?O;|v>$g+mr2!DD)\))DyKɤWj<fjt(n{xi9GK6HsF}xfļʄ!nTBŘK`8T!1SېEvVds#9 vpmpYWP[CALΩ{s/_(vep6"X " -ZD)gC?@$hg2Z9P*q@ Xf?(}JA2k ZX{ j:09;Tc'^?n^9 fnU?WEnQsi"=mV]oʱGGBGJV0v nGbhbhK ytVU(9}rD&nL"AT2'uNc!5:>I2%(,cpeC Qgt:4w@,l lsQbr2WONa@Nm@ $l$,Z$h2 D)7"~}E H)]\Q`/PB A NQD@9f _B=}P9%4 * GbΣf>ץtirSq1ʹJ&bIrji>'櫪f{ќ'!dRr^^gI u#&bBUeO4Vu;ī7۩Ի\T ^L*$WRW!f%iU~#BUdoigQ\,SUj - ap.O[)S gsSQ8qzŌ[}PF J=>(&2` ܙ!"ODA!T Ut_eW+(pܜY'Mqϣ ;RX;h2dZzr{NekmH /e~?] 85~JlBUE.qVh \G5rL>0G745zl>T+c- Q!!8B3uS/IN MIj$f.%i5!c57J!%%B`{9FrVd#%+QPrR|\`{.Nǹ85i"aZ@fhicF&zGk\EF~|sd ?1F: lp6ߜ M|m֑'Hky0ח  [T!Fr:3&0"`ZpJc'8ӬC'i\?B\+Sٓӗ@FA7<lח-`ގx_gOgkՀ܍zɸo{Ei r蹄؞^fNrI ,Rg2DOUd4^| JBDK9 Q Ƒ&&V@ӭٝp٢:ݮ|-6I)J}ɫ g\ӑB2[lxYZ#W# V+/,kb(xy0հ֫ L&eR=@D6dmqIAOo<ZRt*\1[qBue_XΗY$Tc~{ E90lo=GϕTrdK,P5j3{oHϞ 턷cJK6QdqDs\MXټ,T3։&IvE_Mlh_P);ކ' F_\v8=i+Xk $٤B F *D"”)ƍ8929%&|?ܟJxK5 N^W%O!?MnS폔j/BͧOx>GuvQڳ,emƜ&da >UғMD߉&Ϸ߭wx5)γٮGsd0N\HOJCGb 5LJ E MMaQH,Mߧ|~޸ 5eZ[[B A j"1 39"9+ywkB]0DJNcRɜRafXTqL-!I$ X0ekɶja݆ x<$h(q`eH`֓@=8FTZ "(5M/׶} BdJ/@$ 1"b aD+$  H[AjVv sBZz`.O= +|n!Z# i aN5kRwW{?.; ?xS*xO{1څ~nmH"_WLpZ3w4scmT4j0 "BSEty}}},9)+pR` 0E]\KA)yNj| "uV] WE-6/}ҁ2$2֭箊*qPOӵ;V"J׽ {S(bDNWh;QbHQxtm5 ɣQ|ij71 ' NY Z?!f}VLH&2&>\C:Ij/pgXt6oDO_ ܥng6fV<^\I]|Odrt-UJB()mS=R   v➳cMRA$J^D fVqpAE%]) k(fG36ц zϫrQ g6:eҨsb$匘 7y5A=`{9QIFmWQ}+(gJטR Ri.<Zx)ݕfG_h{QٙBBe,YJL&0"~ J;D[l7.@8#)ް >8EߤD8I"Q'B2HjO5rthxZ;oD4xHwrVkgH߽~/cZFK`ԻYy$<qPXXadl۷ c׭ӫ9:?Njeib<o`9fSYv󪄖ɑkpÑOo |NV ?߾)O!o |0LJKу~ w^ ,NC_@MTQ+P~Tj>䢠L Y:68Ѕo e" $s!gpmVC]e Oenk"XB7Tn\ˏo-bxՕ_ɓ &/jrSSFT0A?vJPYDP{y3ֽ7v)ɑ 'כ[؏{Ono &*xE\\L"e~ty-ɵ EK8@$ i N!&TEpC)3f,5R1G EέRfQ0A[]$֦@B"vi#gK :[ЧQ)"g侴yL|!O8jS9(`3~ >Z81Aj|F1.|Sq"s G TeGj (*Q )Sa:e!0=U%LvWDLn߿}c`bҙVgpb#Vryt;LJd˽Mpfʱ &uDfhicM0&zzo%oS`D)Ǽ񜷥O5fZD1 '< 
"!غ6f~S{|H4EIO-$-dkd@#}y,ɇK P#PAGXo;hicl.i QY2Y+냉KJ EL  4`pHnKv]N*݊iV޽+i+}3Zim^#7a7K7&n%sd sɳn%FGV(^X(v|2QaamWf'MD=ˤ6 {v3003l6LW.d݆qNI-[MJn\JqIdiJfar›P-5UdnðAឋc>W"fVL2L,P5j3{O3<6 턷cJK6QyqdqDs\MXټ,T3։&IvE_!z3P;ކ' F_\vB=i+Xk $hQ C F#x080e:q[qӉ]o6;6,pla>uM'Nג=Mnkg c4P\<鼳]z+٪q?v v IT<\kR:ma::"Җ9) v"a堼&S&aJ;f2RʃpʵDX/5^ ;#gSXS>?F~)@8'( LkQ>?s\lY=E+5O6zSZۚ>r:DED A j"1 39"9+:mt$x>\!ȵfRvdфuä")9IE$7{WFnb (ksh+ INJx%Ţ,%Q%rf~‰dMFHy-zwAX/(\0&sP17n\x(d )\4w #_Kp-,SMO6apI_0CfQ q*Qã1I(5D DHG;F*Hq(!{rQHD{)B5,S yK,bv* ML]+vw"j:Q7Y7]A8UBdI }9I?޹a7y~V5\ brlО,?1o{ ɼ5 15h)(+r|甇c7O(/a`Y8* j<|k^ +|Any뇳Y䳼`db1r,5`]nzW"u=6_-NOOW'˗* S4a^8=h0iz&5w>`|v 򳱜3zy~mZ^ ?^O.<_f#0 b.0 vn6z$o1>C{m$MZG.ۆaadQ-p,f->n=ٟz8ZGedlYaL2>HXh\'%@ozYlc5NMYqSkæWp9 zs|ٿ~ُ?9̜=7/~DE6ݚ_"l3 /7zi%m947~hKXɧM&㚒2oǭ~Xܚ8{ya? 9fWV{~VkEπ+aWldER0EM_QtiXfGd!GA퀪Sos~Il]kz}tM|G(΍$F$1"!(!ʻh(l=4w O0yU>q#C24§sy%sK5hezDlvwN Rz5eG}m93rk0ӽqL.~i?KECxk26bAbs#J|AF]Xwծw>a+oQ\-U, Vǟ(Qeve2+ϙ%LO`e6Pvh߭Pۓ=|e`R$PS`8IGTМ@$ Ѡ˝P$w(e~*~!o$'|ԉ &, iJ6Lj #MQxT<1(#*J`!b1o条owd4O%]rˍ] k;(7pMq&'' Av /,`ńddMDV ЄDlY1aNsp;m!Km_7HB%LZvޡuG IZé1ŠQT@"r@bD.%\H^'mmeERBHǐ*͖)JJcbLj(;9r̊ؓ*W~j~F.#8>a]&;\ |tsO nfd5z(L_!1x9WpT>;j.ώ^n:TKo#9JS_/^"\a;r{}OQdpڹ7};*͒YS䋚24κ|}/-Ŵ0wcۭU^-fN{Kn^sYy8`歽3'=z_kvMc񽬞{~pqWcft=^x \[z|ŢE|y=j0yue(2UV^p04.<;kPaɬ J{ɽ'&e Ir#q`ONe @3XRlbe=&(!jٛb@)cQ'ehuSblx߂ɮA{g~*aӨ10FE7BRlt؎lGRv'IQ)4<2:pBFr)0-Fz:9gŐsj;K*Լp 9n+mz?|`lXg-0.  
J,\ /T: 9޸vHu %`\啐HМvIH0R` R\0:8)D$"dI)EŜ#܈X,ڞHHu'u{>ZI.6 3I%PrS]H%XLW .<tZ%unsa/ޜY|*Ͳ[_o% Hpn<';2_VpɇS )}=Nm1A Thxh .VI)WQU^ڮsi0.hrn) rΩqH1^})rֳ(k$PҮ:ydIeU$ (w`S|U{QăQzBZR20jq-3k>re>TT2*S{bC%`b┡聐tLֳ UMPZ2#gd\R" Gxt,P2{- SOLn&aG~x F>|bJsJ Bpa#QhO9“#*ɠ$RR/w O!.2½geɕJ `وprblb‚*H$hSRۏqɸ<]lutRC#K$vIKmT\4$(^p>&I@{JQ}UoIu<ؽ#:@C&)ZGj&4i*AMiuKN9lLQ0WY邕d1Ջ?[`jvÓUS?bۨ%L JPWpgVD-2Î2TO *;kg{;I9/~ȥ'*qkQᔠyeК9Gewd·4i˵sFONOL.G#DFK*Eά$;F~YNEy'~'uqqh;7o,Ckč@K鹋8!"IG$G4S)&!ZĂLQ^!.Q#[:$r ;\Ո}ԖhIy9F}A:N9$N qL$JHED[[PJUCئ3mĿ LQJ.k8&BB5mu~bb$*qRP@ϼa:W7 /frR K2Rk谨|4_ Ccc4H;K*i4g( !d V*|NUbcccuO~Ӽ]gmh+o hTrgieOF級KH/P- ix "VR%V .LwqLc+Nm2K_| ua=Xt3nEk8@ -e2DEp6)Q<a U C}d-N@;VvA`@[)Juqw8ɵWmy7^ G-'|}7/96L9מA]' qqn!3Ww Kz 36KeMNٴ-'@wPo?POS-)_@Y̳OwNdw  ^/J'#1=)`6w`&͠ <(j7}'TG^=aߌ=Ob R[Ӆ [+c1}x^EX&oa_ku3\w8:" "aae:UD`BIvY&H (E.(qCvYI(J1{)vYbPEJ%4 ∵Q1/2O ekգbS|M_ @2GFUT^B!vu9QeiDyPdWy<;"P2zEi^(#'F-&x#!ǙϵJ,lmf< gM瓱pC,k4NէE<0n;R4nn3!0bGo$ci ci EQ%4~"g* C'ZK(IItahtISt$LEt!"'Ug=b~/ڭ͎ڙnU<\HJW-fXю Xr#ȁQ@e[YQ\9Dlk~<IˈH2"fDl%٬ l QzHL`u4€KFb] mGU5-kkQkPGqTwdAsWeG=9q wp]wUOId *)dFRK6;Xv& ܟHoeu ~G;o <@ô {9/iiIE3$k5awT$reZD\"G+Kal#"H.Wꩯ.V6IA8qݲۿH zf>L_ F@05Uy#%|x~}tt;aw.0I=-UJPOڲ2kl__=mCT,z Kp3"اUd4^hJBDK9 Q Ƒ6&-YA*|9?ez˷CarT]CZPb}fRκ6w[NݓY/G/+5sff̊?|`Yg mЫ=+VVV_NTs07]XvRN'-m]/ L![ZmuUrwWaQPxFez[q^^i,-]]1n!P/G+$kb:]O`e$P]p"՗ TZ?׸g*2~3kL^)4Q+(iy욻 M'ϝvbhd\Ut @ oE F_^vp?=i+Xk 4٤F *DC"”)ƭY\k>7rqڥMU}񁥐\^gW=+٥{/tmއ ؖ)IɂĔ#"᧠Ϯk\,4VwzV.-%pLkgR=ۅM4U! 
n7ނԒD-1fHs3և^'4iwӁ\wF:EcnuɦV+0d9L.U=Iǿz|\P+*0'oRY7y\%Lǿx鷏|:?VKDzk~JXoH38?OAE %]i74QxK%YgywWVj 6>,X;'FRΈ L )̋ ۘuZ99+ɭ͉g^I^p o߾x|"o |4Cс~ v+sz\2Vj˧IKnRd L V`rU=qf*lB|W$AZ?XE,`"UlE沓%c2ȳanó G 5uVB*H$VwnO3p56˿>v3Y)t"xp7GwKjP쑳޴f)ڗDުt^wǬ=l b U'{}UKcdn}w(/=hu˭^lq#NjYRIw_KTIYSC/Ng,i7ݎxdJ.c4\3[rUmr)p[X5yl0lE#V;^k)EL$"DKԩZ{IxQSs/9ٜ\?k-\ؾcWoތ7ޝZ; TMw &kg kxanJ0&Z 5e{@,2znxh}ola{-@tUsYP} "Z#)xoO:d]ڦbҥꀶ{ ׃ C6,h2ѡL6o'H|:\{s!ShwFGxĨt # 'p i%IM]fn7Y3kΆ:DyAL%;KľYXr>=2R(- O9EDA^kaF`;>v޶ylX4 "(e*:<ĪJ,"ɠ-./Vs'ǚ[]fZhhNݣk +YjH7e"``Ħ ec֣KQсmX #8rtvT[ZEQ0")4O[5:Ze R5!c4)bXF&Ed6gЄ6$_3I%]BvFhE $ؕ;J̀j@o]q1h a."%]/1<`pƎX9c4 QCK@T:\A.[:*M]{a @FM;S %E9x!BE=K* y'D)(`~Go!A!Ncݚ]RYFE]%׍C\ *+-]i R$y":(5K Pe 1P# ʰ 53X+=A xfapZ{łTU1c)̓1f(+.A_rwax \[k PcV2G@:ξ-*` 8DbQ¼.|ѢƂBązjɀ(A2Q*(}b1fT4o,Ѳy`4 J+&F@Qx2x`DZ=23.,dP?Q i qT# l2xYcB(NEk!}x:m`EIn* 3VLS0"1w@:]۠qrQnB'.R@9b51 hwc$r 9vر",P"x!+Pdǡ9ciܢe+o;V^ȁhН`$1)YG1nü6hb5&J`?WDAo<yg*8\F1EaXEwcA8gTW(! ck:ZbJ"~ M=1Cg60MVH `f݁ RԵުj{%EgL(mU٤ak WFG x0jn6e؀~?xnz|y2ov֏iǾ&L@"/^#Y`,6 FHSp8Y:Zܚٌg9&m:gQzhvfc-` _KHzQz68p0(A/Q[úSC \A7Fh~_;`rpJԃ`XRBTPA>rjo٪7ȰسCV66S]xr3p9%IQ^ѭ `\8R. 
118FJ#5Hbc`Ŷ3 *W=ҵ-9yzQ痝vSU ֫«8dycEHZ7tilpyP/zCf<Rrсg9>X2'El9 Lr,C)=Iooh .JlE*ӥ1 1ˑ :f-Ф3'S(](q3F LBF :`i<];d|J= P DŽR۳][Vg]&l'jIMƊcu~>uׅ @=Lĝ'NcCkY]kǦt|Wsk\IlEG*>w%5t4J 3ۓWQE %*l%(D $J QH@%(D $J QH@%(D $J QH@%(D $J QH@%(D $J QH@%(D $J QUѺxLJ (6@;H%a(D $J QH@%(D $J QH@%(D $J QH@%(D $J QH@%(D $J QH@%(D $J QH@%K뽢KW?mwqvAnn^n?۳Op =uhK6'/\rQĬ~¥]tNIvwug ig7_?X]mNR(m7i|'^C-з~tQTGaUh0WcIZzV$i L[ U?tssXKaNt夫`upޝWg_x _zbcutBc.'W~M W`/׳=YMdjeلF]&6.WG)&RKIU5l ~Jʥ81'r풃a:}/ɫ3ց c ̈́vc;1YU/~0X֋O?hG7*vIU‡1mH}J9il2TrFrժGutߞVobp;= v}ǯ[}{ϏwxٚG?uo{^eyƛ7'#93?'j̱#sϽon?~_?6V:}nWos4f^]^M8q6fۋ˓n1w=/^/.W\NLc~/Nr:K?{|cz7}_xPQ #a,~P|U}1_w)<_DgLvTr1xV@ODDD7D#3U]WcLnTTT*`^5Ne lM - P-30$$A&mt<Aڃ.׿1Z/|&t󙠏wA`07cy&ԟ +w3/虀řN_`eR.ņ)jU' OE BE]Vv+{u*톶==4mzz([¾eyLCqũ5+;M>[[*w*׈3NxrL_.Kやu1s5{U#Ϯ.ޝNx]NŻWu1^]^W}.xSF~t^O.pA`eU^_ ͺt{F5r?atoq՟s%Q^=myEo?pzR{s}A9i>~pۓ_|KVp3jzGq=q> \fi; Bק=o-NEe7J$\\\[mڭ>'{mMB9o>cﻮȳӋ0&wL%~yu0 S]W:gz9_!eKXx۞icdž`vBW))DLVـZb%+"22`"Uu(cTrw?P뻿Šw߉;sf wHJ`LI9!rBK\Ho2 <+wFΖJ|}w@`]>]B0M1K85?e.dF ^0HDN0kb7HsM$Ih`a:`@"'@F Ff ^9N{ NVZ9qOQ~EYɳT-EYU,\h:DXF @PY*ROlh].$C\H 4…$FÑ$bDc B=ΚT#i 1̼NWJNYVhF-pUzAs*/0\z};>BYkXK-c|VcWLulĴx8xU!{:;^ЭPe cH,h`BT(?Yvjealirprp&wwV;8uZ;EY9R=!Q bLJ؇6Dug9ٌ;gin{XR B!M<0ލ\]nvYJx&#(]zMƖsKDbz IHAh" VHJZd*+9zfll$II8D"xjbL,r`\hUA`bs^ Ҙ^ PYJ(Xd d`#QDɢ0.T!)<;04EJ0U&[kއQSeMNhmܯm36xp Zȃ=ìfYzhZմP20K_ QA`b҅VFCXTOũXY`Bu&WAiݦpZlUVأU8@᪥:#mzvۣ0A0ɵ w+uK=cF@1@XZ",88Jt 703S~-}5 SY~7#Ww}_jy:a* : > Y6,r໏ݫ?׃_k"}qk0eYc=O@b#Tŀ-ʺf#_'q}zD\_g)tXez",+VP;klD-;ԣst4J )wBl&m2 0HE.:>=bbn=RY?,J=Ԇ)Vf_bQ> o.`ؼ9?@5ks pDB`kHXXȂ0R@xd2Aia9N`.0 r# T)q!r7Ef /0z\@j=Z\ܿ DкFhBT8 +TZynZ[MrF#L!-1\) .Zyj)OE jFd>ZR^(#TOp Q[M*FDz3˭E߳((0g Ѕiuᖺp,.EI.nYIn0v"P~{D7Y4\14RAJSh"ب@%mD΂۴*,C'ZK(Ij %HHLBDNJrYc/9Mٸ-= }YBYsZFĸK Qƙ7mFPyy `G.qVۖ2 hG]r: >0G:fև٬K>"zt?vՈ$F$Fl5so5!FHHqbKFb3@S, L_.3\vL@c"=݅uҡ{&>0vd63׽cєi㺉Kv詄on6/DYeܴrHE*L<) T` DWbi1g;Ja8Cno䱀w-[sW[t^ՔgH#6A(NC kgp_ƭ\6G.KJl #ϺYZxexW S KokrdJ7"6fa`ZqI.mj&iG!Vӭ~7+8\*AO}.wW8Œ L|,oq?CԜ,|w6o=nGӁ&iza-]@Q{Czf0o';q&:c<<3O',ob u"":i׮kr3{uVo E oIf/43ဵAb@M*a0B0 ,"Lj@ϑ.1E) Ʃ, 
}񑧐R^JWXy4sJLϝw7}"vZw6iw']G?a/ ?< AЮM%jFa?rVR;ޏJ-C䢤L Y:`A rVr_J~4>(s>Ђ6Rµu:^?2pE/w$q8& Smx/GߗPS|zlً]T;`\)sᗟ:~72:^>8Wu:hNhxzyԙ|_?=3ɜQwϢ}O,h.Ws:bSDβA<73s3;ws3~{tXaiv8?vyl՜)b ${JCG[Zk2T:)ijI"F$؄y$,ViǬQFYJy>%$K@bA6r<0 tfy-+w4w`|_X:p0 6)cLJ+戤>ՠkX}Pw*ב @4:&(fEwZ<`@P$@29f^*%C^AnF}nvHc4RNǀru82$0Itz,r*-[? 5Me*h-](D 1 q,~H#"IBR ptˮ ^)ĬE԰T ̰"Ny 5R̎Y0.R_+C8~_Z[)gNQī[?; οT<c ߽ڐ xM\e6yu4seuLi`9Dw-;U"ȑLq b3HqtD0[tv)338O@ޜ.f*aUiC(KmӸt a- NX(`X|߇1B_MuxܚR#~~:?NnnX NgV`<g0Ix؉TE~tQ;+4ݩ V]X^nr520[C L͉1_{]^UkKbmDpݤ,o0xrLn!ia`*&+>f =^ٻM{*ʾvVuu>Vd!$f2&>\8tFΰ(Oo*37nT;w{PG?~//&}{8Ց $j5x2>K0lj5"9z̫r>rͼkć.-ٞR(CoR,9ܨN q/g_Ao}޸3k]RGT(]bwK3z|PwEᾼ]LǴwUoYhM,I;I`fq@nXPؕb[*q:Cm m鼢?_kx >>Y/pF #)g&I=H)̋ ndiTg61sy>Gѿ}v Wv0ծGM(צ[>znWI\"}Sƹ`1X1c&%"@ F/pJÃ&N"CnN3G=ɨm&F5x"XSB@*ixBnHRk?-GyDQEjL\`ha)1H-h[(ZjS6.m{sQ9IILh)& I82iQQB e*执Nx-i @Gb*ly}3@?~6TaLO \.ޠnyk%_h%|{;K/ E"DtUmbבiq%#-r/?.cqclץe|p1L^XZͶH"xE' -%ĒY}KF1F+ZbN.keq+˿Z?(ˬKٚ8 4?ߖb%(4>"@ۮ7n}c$3x1q]6 G(3Yȕy:鉶6dTzl$FT} \8gA߭O7x6o˅Lj^t'y1lM ̒.䦓} iH,Os])0q3'`J뻩S6POzdrYN2WHf'`O0X֝6 ŝcG|cj`mw6n"I[j۰6 ?Ȗv6t\.A18)i.Nӭ:t9v~IY%Mox꟫I7=ӔfaȅLB},qNZo f33xT=ngrn:6a;{Q)ӊM"o{fv\Xvdi*-m֙6IKdU7YAQaے+],dS)qHQJfQ!:Z--,xRD$B9h]VJof 8XRQ?o '7F8> 1 !ׇj6:JM,r8 ?R!_tY>Eu{P{1| +LHDLAs>)v k :;H?O;ŋ|;d+j,.'ŕ>%g!,ӝ'"R_dBl\&W٫yu:Qz1z,M9 !\H~/5=ot\LWU\tjyޯ?}Ào*jVn5\v1qffph$B}!}_Ӝ ΨFC ;rRI)gv:np{6>S?$rUQ [sBy\ R׭?ԥM>d2bC9xR-R ]\ϮT 3U+rqzwob}#nqFUY޽9խ{Ub_KWݿ'?}?x V#(1sˏrsKNp4fVyXA~rH4bHMðaL06Oy.>n =ݟ\Ud娌u>ɦQj4i>Xs d&eiR/szF{矿}Lx8E>5${'.khK8|haІ6g]/660 w D[n)@ra?9v~Y]-~\Y]C/̿d_/o< AP烘B4,P. <5>&:Dj$qʍ$n)8-"'} A aWf= alp\Ľ[ym뼢/_;*4D6>]PIzǓA.U\d(TA&"~#w:[RcPE2c@+![SEwV+,OiR;-xQ B L΢uG@y$fԊW'92"&OZň\40IB81 B3⩣9'JeʔT1bklƈ{rhߚ,"y}0m@ڔ ٺG11575ܕs܆N$TXhIk:}?;v?v-$I*1 9')&ֻ! 
!R&j+,q:3̀9D) bTmAL&ʌdr) =&bZ[#gK}/Ɠۯ$bw&nU+tP؛ǪVGЗHI0"Y,1 D ؊X֪{CErY#4֣6r:᭶&H]Ҟ3 U*yb[,+qW |OM^|u:qMQvEgYɲDoYLn˧W|k6:+}B 8.5Ԁ ' LHvB&$; Yf %\N O% Dxgj@CJ5V04qJphd Tkq`G*!kqq,T4c< @[#g;^>!YK!KyPm`z1SIut3<$u,-5txXxUkB_P L0g/@&Kkd*j7K\pW;mR*N&RH2=dڒMzwy쬠s]Lܘ DVPɜ :9NCbbԴ{NC (Dq/U+PrF:{<.#@]m-r6Ѡo)^W^ǔ$mqpC@6O -QLr+eԁ%k4Q'pXn[ӱtrJ LjH!@ũ=n.&%eYnp>|eg犿+Z#Biqg>uirf#b i][o9+B^ ζK/0'3; pžlDXV2FK ʍr:'%Y&"!D I9i1KY*B푏S(7k6Za[/u/{aV,hrof5EƠms?  cl&t8FMЕpV}C֤ 1DUJ$tΜ(, I-o3ӮN넮q@ՙ>WI7~gw禇֗GepzsT*wr5fՓq:-g ڜ!0i=k0 &VX"qN AwԮ˲L׼fgVOcYmzO\P7[Х4L+|zltW'/wl`+zilcr OޟׯpBIfMldB P%l <LpyU,,3O0/NI`]`Vy) 19xHo(K* C¤lMh H ~ 4cQ:T ZWk۷jSJ~)[dqQQh)k:hC3 kcATΊuP*D$q/i],#zHF@V sk 46?fL$K[6Ԕgύ6qr8E?*.^gea=wGG_+)敏$"wfԹ-لļ@a'iEc3wM,YLs5%Tl#vJŸܶzh=Z!6bJN4Z+1e-eIe.FC,2ì[dsʒA"ܥDNB4uHol3kR1'|Ԗh,9kW]=6KKIŒ~hhjmR XA.?Zxu*i;^'k`tz=wqKr5L$V"*\ZK*p2r^#-X#JB/u I) z P馱ƋYH$Fh#4l,)Ck4<"9rDРש{aSP4pS1rV|ig]g{hec*!YHȉڬ 4ˈ>$T"9Qj1r7_dO.kjZ6Isd@h"24:U2V''_ҴٟzS u ?oMW._LYWj<.͜mFzSoU./ <A'bwKHI9 Gܐ( SLp%GT+) Gn2Ig~py4m׮5%4rXTGӽS(K~x?T4l7{9W}{zsJw1]] "k{}Z:99/pUNIz`rֺ;&QIƯ8yAćRoF͎w47,1ҏR?fW~m/x1021Tl1gKoziߢsi4[-jFnƮnfy%i8jzbGUGO=U+[VrU+q$|44|6^c~{Nǿw:U6u&izq/i .o(H6[;{4Q^tK=_z%F?~}„<|h|ϧ"xT#,V6`ټH!g9.|~71[gEmdKyGj!6"I#QimsHjy ^F; )Kn|L.K"g\X%b8٠3ef+sj欩ޑ0\~#4sF}am%zظ=KW3:˟稀 CF3 Ǣc%,X,8Xl{U=8XJaHK3b eOgɩ z$)s`W3qg}~<y"WwBS])R^Kk)&rv;#O8 "2'4&؀0|$$o |$# )BB H "JdnIhd#jS"1h  %T%6;a#7&jsbwc#謏'8e8dL@8Qι>r."BZz[{.Kq>yl6Gz?hlg+hMs}t|4j1VNb.b*r]S GYŎ4 ȲA<֨A"!Ͱɸ+@0]MYd K9jDYDd "VrsT9& ĪrgA^؜5-@F^!EaC7Tur ֊4u?K{cɱs/$@0)wώ2|&Pq!~@FF#p7yѰL{Qm(>%Dl "d>y!JnLK016.}7w>eVNvh2 cn]^_2 jx)[az}2W >ِCi9HaZvc͝oyМWxwO3o毯vi>bggYgO&\<)NV_$2Fr˷QJhaSIU@9X*t^,n//ү#fkP (R˯\X(<8CΕR T!E۲b0#WcO 2Z1YJ˨hтW*YAHk0[ I6"L_}&UP' VTY8Fed>F KKVD5BTm־쐵=G]U3tdB;7`[|J j(x9[DٛV4Jy#28=c:}㾭Rmrn|48;T/{r}Y!\SK,mQ%q8YSA[ փcuPEtqB`_ckn3K.raR#~C߮ {MPqm %*rdDsU 59`TYt`ٓ|?>^{^<y\aܟ/wҮm'2ח/O?-<>W?nՊm:f,tqUz;<_0zOz]_w_0Ó\W?ȇ%gKe |ږжGw  gl3jASנ  *^X!ׁP$?R-slW 4pɟvUjpyo~yxEՏ/ ȐP+ ӮUƳH"Rod6-<ֲ`a#.'7pK,|w }yH7Ty֬'`ˋ Έ]3yd'BWjB4U(gDO2TЮp d 8gp,{&2 MQe@M|r(){YR4*  
FWS*;LlFlKXPQ{dVC('䠄!%YW65'vT-f"A G[L :OCVFQ/XJ 8 F Ĺڬ"& g3NE}h.{]:#"8"Q%aA963i&x3pzF40yH*uA+<6.ymG gd)`qioX(i@okLw^ٌEj\,R\YgU+.θ#.nsFV[ҵMZYt# m` GhVac2`e<>pX]g<1e1I\Xƛu^q;a! r{|tv%S|&yIY`gn7GiءU < E7Du#՜@@wtP.OçN>sO (q/{Y>@K_= IA?7ۑiiGOmpp'Sֶ 7⩔Z厽mƱ',kIAwaxj)Wt^$ ]ՃJm WRМF-Ik1̗OJY/u˔MۇeCnEp/@狷p|8lshBF+jt\B(L!+T *v {}tb\^yYQEin]|uQPxuMOww`lnnj\}*.DDo9?Ŕ 8+Z*֞ŒJ)Aih D>uu--K[R>kݭLKZl.HzDy+Tvy73N'F?]w_UEhs@b2`Qpr(Trhi\*m0Qwx89&*j9nksMGUGL)?]ȠEv`\oupW!hNєss'uG\ˌ}檽XZxìR.,#\ bMjy [AUedfrx}JM46.}2#=rfWmt 5"4[%Ð?6VbqZ:;}_9onM=?6qݷyp{}ŻF~5U'9|CǙ뫥=w.=uMwe]O|o9ۻ9 3>,ʉM*QH[c΅ϱg[uvJ[0Xbpz M@U9kAs$CJLUNJU&PIx? JJ;p6S(tLh+i}^=X˛H}zR%UjjGw`?VpԀ@ZPeS)IcO*JF-WsrL{ -vtNe[2-3ڄdQ4i&vD9&$62`[x/|k'FQc*#?[~ʅ $Tƃ&^l O p6h9~aBErC֥nԳ.uR]Qys$Ap1R( d_."_/)bP"D9(UQw6llꊌ.seVl!WALs J 2;(,$+FnٌYA @{`ᮞi#"qCuEھٛ[t˓?k0;3bLYi퉴Е gDO2TЮp d 8gp_LE(2`q>9”=,t)bT|}yk8<:Fi&J 9(am`bHIVtUM䜨\3EE Dr-o&jġc+PdE%BrB\mVIDu팇݆~5 b+"BgDGD\=$,3}Ft&bBrbVG0}Vym\,dP SJQҀސט5͈x^f"1uv[%팋8∋VkK +nt\:0ػ8%r~ Nl%v)7,>q߿3/`vvNLtMwUj@nqDxf`y$ Lh.%鴡3jI\61ALw.jP- P2ǟO}!]E,FI*10m0RSWLcSTJ`8-R)+n'Mƶ~w \>xR7|{5'3主aa~1m>h _ Ԑ }[ʊ$]հg*]co-GL9D\J9U$Dg G!93JFDpl]^f{ѷ&3꽼\=[aDp2*&\ZAK@TS,uGp uFDg½([,JT o[g"ݎ9iR5(N-Gg$$~=pN+~i9?'#pGZ Fx!$L^ ;$@ č1'8hJZxbgPU OO1-XoP˱!MZP ,F [ʼaLHdyd =!"=A)hL- YDK9 Q tYkx&?TrH}[-iz]+h5I)y]|O2>s=(D"'Rpq@! 
D(oO3?}U=yvGt .l5'4J!1N!IHV#jF4Ouߟ^bl#7#k'/nq`ȓ(%F2V WRsJY^ Ƥ~'q/uZ *:v50l/5ēWnnþ; gƑ HP;#(`?"oLM*Rfͮa6rڪ=M4 .8LU[l/̢eZ  #׫@%U fX|ۡYo❋mt.+QT LR7qr fV{Eˠ_Ox7ؿa&Lԙ@\-h8Wܙwsp$j}e5r4v*a,6j|+q" CŜXic Q0Lk-RTɚ{mFpp5lȹ P<]DGJ6=[ !^H<`hYD2EոUz(ā.0_Jx㩗l#2_39{\t [;;CqÃ避oWa5i"0IQN)93Zt#p׳:Hӧ)C!fH Rz\Tkbfװ38,s?w;E?Οem#!-)5)XyRagCX@pr0\9Mr#FAFg#t`;^ЁڹrwF(׹P.zj'yɎNj(l!$FRiHr)CHIg A*Z@LVuTG{eҔh#\yg>j3 >L:f*bGd7}3*1EarQP&R/ ~!gŚ@V P/PÅZ l0{Ihtշ-wiAMp7ۼ% }aT<}Tlg;Nod&w6<3zRBY5⳴own{R<0\`3 7\,a2:n^eȷwAcdXqTg#sWgDf3 '{A<"!s_ʙM#`ZKǎPNB4߃¶'1܄w(%ͥf`;z~{oJfbi %؝Tw9@ $Нhkŧ{\5+uz޻2g%0RCSWڷpZ5W9wxIp3B(:JPA oo=o&s1}pUCd ~Y5 cC]y Iކ-ׂ~uWibvur hڴ[]|t?:yɬvޫ}iӝ&S{R$Rb#MDn CwqcqnLJMI'"/l|Lw¹% ?r9GHSQ25Fd*LF$jeQ4v`:E/@bQM꡻录Z.uB<-Su D&a=&P핔#Pdcl HPD<%E|'xxz;*z: "2 ҅SZBb̥ǁ 2N;BS+!*F]M<'uߟsI #Kg5@'QJ`&^)7k甲LFEEw곝aT~zZMG:rkv}@hDfK9x:u]b2ta8 Ö{gEGV)AdSʌc ٕ ̵FQN;1K{G8=8~L$S^b?$t'i$mA:3Mi+fxg rёhEz}l0Y6iqK}ԖIr~a+ngmk:6~[<&IEWv@ ,x4 p$ur[|Y\sꋧjD@9.1*%A`$֩D2k뼕'f#{I_{Lƽ /(^?}ߧ m4{plF*Ћ!^H<`hYD2EոzY^@ϋIތv~"NwZѲֵGejKiV{mf\xjta& 瑢+Br[cQ)J) a|O~D᧊iDsOM{<O8q[0I#6E(hX,kgf}{vl>'84 M[?urNe%LJ-پ]S%π`#hߚX={ zV]TUJ]Wh^@Lfr/^ʺem2˔yyW^Eґ~cg+ב*ڏӵeR$`Rdu@3M3R#ujlgl8yEyz8EK I8ќ(mARͺ~y@&GML8c獯s6!"8jX33 sf<`,Aaa;Yo^ٝz6;& cZ=,~D o`rZnHz^ .ϠŸ ^Q+W0\K "I"p/ڕ7oTRo~OޛǛKч>v7;?/8ˏF5܍J)>L\ D4J°(}tѾp`J-;uӻxm^-lYm\ܵR6n>GߖNt/W7A3@Qި| ߙތJw?ث)T9zxуCfNOzxFjzO507kFhnIa@ Ϯ"%7iQh 5(¿V8d4]48~( ZP1@",!)"csC y0FR䤑NpNՆ-IY1ip[FNiG_iJ3'Rږ_D$O}}sOoM3ĴBa &y,,b.XFwqsD&qnhㄴZB+ XqRT|{WВbD8G$+ x$ )۔;2ZFL&ZȌafJ6t|[ Ʃ 1qeݮ{UŕIk-wo2UʲAe 욮[x󮇣ig]r钮-myȕUu$ إńVW*m `E:c*r›kvUy%Zpr5oA,X}uMބ*sj /zEk6onnԦ_+W~gVK/jnnD9X͍|^ˮۉ{R1ʡc/00ִ`B+ Uct uVl_EQ( ̙%! :(!1%,A{ -rs!NpyC#z!@v8F$㑅H>HԔ1т Ѐ2&"YwFΆWï/$nѸ&ɮok /jQp ] &y4DD^q-fZӋT+2;y-8&PfTeVepr"dQ'yp v^?jGF)Ό]>F[ xj6ɪJ EI%W$CQD{p3s;ymoMֿzx_xwL %=|*enY.xX\\r>jY(p;}@I2$48 { !CHхrD!E)G + ;ʒ&c@ 9FfBLZD"X)uK(8:L8X R]HQc✓Q1mf#RV<"ʹ'S 7B^W-\o|IdNc؛~&0_sIL端WkP>Y5dɖ溩Uk{>Y%,B|wOVKk^/yWea}->ܨ[Are@&KJ ѓu3&=t-;4-0-(&}w}>f09x Eٺ:6SsI넂># T! 
SzH>{G#7#G}AG^ yZ˙M1vx]7ᴬ'ҢƑ>%"SLpɆ~M?_AujCJ2HЕ8m .Fėn_4Yz_p|/cc r}YڡEi\7vxXc4|5&i'ng{Ef}ewQNӤ.?Oi#?uzn _CF83V]y73jzlR }WE?c]xy~_O?[nKsOKwv}{.iGlwyZ=Cʴ&tZ kG[4I3gԑ~{ql>حZw#{Nܛjàkg--}f{] $v o t!_v[IYIuVz9fLxժ-㲮]]^oj+V26\M=of90ޯ.[}w3}vCUmbhu(,G!h׫?'E)֯+_OP1so5۾6.H= z eqG M:MZoz>[,ϯ:Xk:*>lٻ6$X`GGKq9up`g #dHʎg)sHR="KfwuOW1Ϧ)OL@]Z[>,7PT+yK͆-}9L`<Ym&[7<0I~/\XdխoEۖΰM؍y2]du`6 4W.w5`,Й6lMd_Hr4/7DE"̑81A` E0lUmYBN..0qɂv\׊(>x/ZڏEy9B:$I@'4s`q;DT ^qsՋ}j'>EeQD!Io E0(RXZcQ!{{x 멂n[:tWĂԴ749ozPTYwiR&$Uwh4!9 E9qAԩs{l\hi#hebjSTN#Қ3G}SQ,bSm^H .q )<68 .p%;v #2a}u=!ZF'Jkd̔Q\mwj}!}~z < cPA0!qk88!C ,nK^$9OUN2i:/C JG2mN'JV^x|e 9e v,',hBTϨ7U62)kj)`)$DBJcC0 jYc;}_ӜwS.P yadOaãMEdN-N-@fF ᚃ_+/'{/P/JU]lZ {,m,7QD@g(Mt3a sڠ9a.xdHPW70"Q .a:sK0qtYx+} J֩xR-zW_)..g7 * qS \I<5NPaRMLxN\'Uz[SݺW5Ϧ:S/ՓngWof zA̍?LUܯq7 UO[, _4"#X?M0y]PIzǓA.U\dTA&ޥN;\ڞ85GlO:x'xmEێ<#DiN[wۙзC\⾟. TlBHࡘMƃ6EV'0J@fVD.]=HxsRc2}Ϯz@Et~ϣbvgYc"R"ur)\ІHEE Θ  ZNg Ū%~( &H%4') b~dtU_Mj7UZ,ʲ*~U9:&_Z3om¥%~7^s7Ε7eן\ *6LjQ 7#`8V5S"<[#g;FD ]HSBO(]6$٤xlq´{o4eZ+% 3ALa(!/lj$ƛѰnn #'+jRSֲp 8yo(Iux~{zV=|tXCz|Ni+j ~Y2Lz\Y%xD;a&vh8HKU60)yH9&!C-ח=LƋ9B 닄6TVa>wG=_rX:tsJ(i*4dBjS44 fT 7%Z\C׀|ajA*,hIk:SaL@:SVPeIIT@'*Ř)&a9 QT{!Td KFi,i#%hq>?D|9rGZ:PUfpLEeq1+ ȐzOiu0̒`\ĈW$V LiV9* "c,o5ELI`(z$T2Fv^>TplWgdZSTf3=!I#Xe)Jmq-H,Xo':PRA ةR!i@ONd'BP!XBT@wT$dQP^IjcAVI@v]OU5ifIb< @[#gJ.䬹ѐsQǥ`b5Us6lb:Lxf;Uh0md+?%a! 6Ǜ M $Ȓ$ Vh[dJ 0fuwկuy^u8h_%!3@`Bs(I1(ٮSr!$rI)y? Nk8T&h HDZ{=CF1,*2)b> |a9$LxaR[[h*ӦZS`n'Ϯ٬wEg}m!Ug]$JFFꋔ`hu{;Ł1j~g>RaC "p Xol4F+$in-2j;,} K!IӠw2N$BBrR8%*ĂCL!;$MxyCB^ChZ8. åv60LfTKQ25 D4VtXT,Wc䋯!N)"D%,J^#QzȚP*a2Es6m+r@Ь9[z)XtܛER^[:?ieb``R3tny:F }Sx_vMv]*|  ZGV 3f4j| Kba0Tƻ'/磉՟Z2e+LelMFK9 *M/ƾ@y4>|&ڥM3 >E4ˮ> \`<{}o[rmN>N'o(kHorݒCLޞ}O'/&2N[Kn%lȝUM[3;|$UOrAi_f\t(Y`si|DD"A3(g`lh KUe[T䷨oGFXlpLw$V2gg:x9c06IYg:V dKտK>Ů3\w8:" ".`ae:F6Kbd1! 
b <* 9N|0a rڸr]׈芜͂NER\PKz{ge.!|rFLZ #|V-g.sUNQyT%109 l N3TB&+"C( 婥G)0r2:D?vED1"{D%C gK{J ՠxg-pC*b$ڴ 6QzHL8D:łK@*&LR#1st ]?Xj Ygg\+.qmo=ccnp ы6:r-؍Ї"\Y!I׎RʢE\<<;vC1"Q|*{?G THMDnƁ,IRtK *u_,) \%r5=jU \1\%%>*=rpZ}fѺ\q_T Gx]=-X9/NM&E- r`Ė F|r:D4Oo^hGt̚xЏY~Ǭ6 wens u_[pfqo{u~=%89Qh0"H8Vc. 9)VS,uGp uFDg:ͮA rH# be}0z)hD b Fр‘2&"qvٲFk)R[VZ>#ےj~-v׬} @N<3o/vb{_X휟qv@K vh*2HN0L7ʍ8 +6>$`=0<` [C-`#TX bRcG"-:%T8 y605^ YDK9 Q "CglGϴ$yZO fQMK]KĀnSw jec9-"Hy}+O،gg?/q=BYOϳt*vg fOnI|1_JgIErD^ra)yt5eLA^aO %EATZheށ4<|N+jNG˳ MW|>s:m9xu|S2Ds1^|JL/ZW|m=ױzՍBňIвw埓5)֗KIU}lL6h#pK ~v_j~@W3fݲplm-5>}gcd4~$P͟?S ~O/fK5~^4~/ 6oV61dϞ_˺EҤjdd a90V2d\V+JttvW PUSZ3XOO2gy]ڹ` gCGl3@t~q6Y36C?zTl_$%eL$P`/yM2fΗ nA<<7?ߧ_daR,zN=V^Eyc:&0" 66}C-ʒ<,hfEA&J>{S$i~wu o5(>\\|Y!bx91yOD+N0{y1/f`^Lϓt`*[78kWki)꭮~fP٦um/&ipg``R_Z'$a#6U7w$A2R:hGt>uBWI'Xe,_Z>=Ӕ,?3 icӊzsl6iS?bBf]eV{CΠNxd-J6QeIt:6_oyQ}z[μzl7avXa}5sJl{Ll4O9.1*%A`r/)7{mwtq`NsSoI%1l aQ"`,p$暳G BA)+b0c? 38=ZŒflO+v;jTpP4NX,E*7&Hb=Ð}>vR}u:/T#OWŐj#8#"m/Eh 82sY ےG'q߯zeEnY ߋ|b* d*z e"siO*w55z ߑ½Uݽy[{⮆ ;Krim8zLY5_!XArRJs& o}[[m z{tvݯQABHN&(أ:"$S*!3H9kњ8*xnZ\z8߁f`xXQOLMZiM&SmOO8J%xB}#eٽZ w|x0] vfOo`|\3rQϵjńlhl1TR$C(ec1ߩz<8wpN߶K/b\(Q;&$ܓ(S1`hE5',<ϡ]s[b+Ke e!k \;C Xia”{mZuƈBWK+ VTbQ)>3JF^^8O`Fz5Ĵ0uyysA:ytr8F[JbLQg%dx+J+o(ܑUWp_^ˢԻg,C&=~_j*Ƴ ;(ޏE;3 CgmNKЎ\bZ˘B M(OQ .ǫWg>|.mb$Ӗ ]Ri˜ ) 5u 3J* #`sX^O1Gꃌi1s"J1" -|#\Վ;a9 a (K&-1 d:UrSCQ!ӐseB<fܗm[>9l;ȁ)ٱ hO[I$c(NJ:H; +Z(xUY,>NkdKG; G;-\zk3bnbN1'nss|ļP"Tr_A*s,ua:usP 1N24NȌ*vID uށMB!ȐY/ipS;i҃]eV m:ߤ>$dV$ERqi`Aip02Z PKР4|S s}4pMyTN%s@0 W*Eù!H!Q͋~DU'ڿQl֍tv.l',W)1-T")NVieDEQb?4^%߇&$RDhL2(i0"374;<(92!imA: R8nDd%{+"\"`$|){g, 9$c .枥ypQx ̂tIoz"rgwQԉ'@%L[FUx>aVIxu\9upMԛq )ԑ(l_itT8 ^ `ac{pđ@Q0L]S28]ۣ5-GVwQ4rXF+HJΫocm(88M#q?CKP:Ov<\)a{=m_ __^/>fPRƣQ**r).b$?"dH7OF[j4uonTyWxss}6}9k]畹t|?=w} I[~νHėJ2$PKl*F4cY,Sc7p4-ppq;]YXjY#JmlEq'$bf2RGQcSo{e8G7__gqM?'u/h:?ݏw#qB=pm GMU0UQWE?6/za&=h*oVDxhDv&bMn(wȻrk2- )Koxr?;Oo?>lW=tW'& N#Ý; ͞K4hP].bgQe61x>SF0莎Vyn9=4%åYŀ֠GԻI"Or|J>AVjrt-:^IHO柃p!2*u_|Xǹ?|Y"~h9K's %G*kҎ*WMjFUGAD\~BMPa/52-ԡJqطRB8[) 9FŌ$3. ޕ᠜tZ$ȼQZ=. 
9b@*s}`du^HCiKQqAkd]>.c!-gq˾Pbv }pl]0wg ë{«ZÂFm$J-l rz݆O@C$TkP*Eb@ YdNNdɒ%Ys~1,1[j;S:,-H/$BFUQڎ]p/^@X 6'?] "8vͱCG{BGƀGG2}ƕ~I .Hr1_yE2}zA@"TH=5$RO@c' Q =s;a0eFՋ@3egVJpoֈԅk%tU іMZjL3mL_Հg/IPM>{>~vn_u-y/MĮ+0*W|HdBcgO'jd% x%gfm vRH#s) M?WN&%VY CܦS-2$.2њeetVyjhxk4f <֦n Qvr~u0ܫ5llV~ ǬtY>xγ 6@]xWuk]:}q%Vm\[Օ+@h}%sQ5_5CtJ]Q%u5I>{LiN{ΐRVͳ]׽_|̧Y6y!07{͛ ov6d< k7cwkϧ_ .SՋ8_:'>qu|Oܢm\pF|vݝN{>]6;qs++@5W1DP;#cq'],kf8 rʊػR[cc w}i#@i>Vf׍}`dfW2r7g4 8)?-d⨁ CV9`cѱea5Tl9-=V%8XGNǂ$a*8t6} 0RAK*Z֪Y/@ }M|uqs\)&TWtr?̒s\&؝jq"  jRx帐A)Q5fOԁG"wf̔wzx0,-AӴ1!B1(c) n,3kWvĹ_C,pԝ!h%J,oHU?5e+奋!I +W"Vd-HA4BQdgV8#;EtrHRT+P0JBT@ '\ i.pFi !bSh+}_ۨ9e\(N()2hsي]V⾼GOY˭媋Cv2LD@6R,TAdL91`4M̥c'c$c4E܃}@[s7櫕A:pO@""+ xxfAGčrmKlui ZAcl啷U›8V()H"*(#"@(P6P<}鞻ټPId 8FɎ ˫VI tN)9=X_QTޜ1@&qɪ hbf6 AЮ~\YhJ!$H~A'S}&oQ!ǂA(dTfs}&_En1[ӌpwʄSpK,jv-Eܲ/{ǟߞD#v V#*k)+p0DG&.,8ۍGa jӎՂ= <h9cR&h1nF%" k5ic%wFD^uhRYt[ZЋ<*Ě,2%Qu!@<:la<,&f ?ͫ_OxK̷X8FO'>i$xhgaż~2//͗[HA`Q~8>=-J.khY{ %['^㳭: I{4]{ ;al =Ҝk JPhd-pZ^kfE4 ƚ3i78X^*^jn. ȉ@(j҂S)Jһ=N#/\!Ο,wI圻!cS @jSj(`B Y\%սӥZxyQ\^dq99[WYJU!•CzϜ`*{8i1YZMWYJ \i˺B%9Y\z0pe@+\=@2pvcg]E}Csj'ӬxQn[L-i&t2s)\["e1`x:[I#xIޯz(olNN}76hF OZfXnG ahn gYj'` 00!K`NP#!Kʥs BCY+J*[B$.}b"FwPT3)!KQ娀M^%£7Ljq|Fe ! EkNBb8>I*d>TY =yɣYn"O:#OJ*J 4 *d+Al[?zw|50r83r GcDG%(QY4; uh9Gx#)ZZdu[$SصGE-˦Ś5Qp&nefq`T^ z =q^?4ucx3Mp}߮~NWz=d|˗/ѥT_]]) or$sr y~v<-G4_qGVx!ƛ罄LBLl848xb-介9[Ӽ9 =y?m͸> No͋l+ݙ^=8şNƓ?;<g8W80k'[A٤hM˛y\^ͯWsltQx۳<{ sJ BLJ!1pH$^hv<4?M;p^Or^ך/qC^sops|y=,ų\sxf%7wQ|zwfy%W8ƞ2s*jlluǍ'9nlCh~:7jt ɎN8p~ItxӮ5otQnHfwHzrh)5Lŕ]Z&.wgW$B]76)fAM9JkJ= WhIse]Q ߛB{6:MAn {aᜮ& Y]8l4pc7ӥN?.z|1p"-JբknbfPz~2[3`:W=|Å)n;e1+,DGYܹΉ|P}k@CX\J2k֊d$NjdD ^|p2Rc4EBӞSTZYBNZ_j]_-hwc{sIƈIx\ $Ys>PIԁĕFF\$@Յ./ /oV] 4vQ;)4)*\e"z IzGpC" 3nm淫lٹյ(YXu`1?{ѥWTAwSo[zRa/̝Ѻ-Yc5ҤMjIF] jh4=}s$91 /܎.zf;2vr1)dsF KkΨbMEhoBepļ,NH@RHwdgՎsrD&­ϡGS0$(&Ζ.9(̒<x㳉kRǖ7w_c/NcPA0!qk88!"nYrHDs$-x"Fg(ku ^:-!46(&DJ CAk/ueRL. 9].,GR!*dgV* IH&S RH?h|],X,gs? 
׋5N@'C%> #nm* =ER\4y W5+-maKzC:UJAvLYkK؜oQ~s+ '0w^S{UmpD`eaIԶh&!PI[XdW@0ɢS`uLT1Σu DuFtcqhZ# YX˱\:=MVA RDrET) J9k3Ń;5G+)‡[1^|E,51Io/*#SJAi4* ъR+"Z\%+@ܜ nѿI\t@`V1;Q# E i{eXX˜KWNDKIUw߂IILwmI_2R?_C 8wgD*$e[9~C2I␜1G4~}]U]]#J? bU8;*QRMQn#PBw!܅:nК`i{ IL T3J gѨ)D#J]LIL3)zM\av-i8%I ϱ?ٷF+M .FK .R TE?gWp$cVqlᚙ2n |&$c? zH3m=d@I!aPN9Ts⭻y8oɫgL7qVޤ;DpԜP`\ |7<`eY=Q*?R/ϥu:>dw$D9lU+Y||{wQN%9:yjJa4NPaRt>=W}hn ԇQ٤x2ᯥɓwWo%`8`n^]^My.G?ֿ|oֹ7U5)&k]U []Y]dySx$vWiGun{u3reuUllHF6NHX`ӫG1YszF1Ry9e)]zNǿ }|wo}?R?7߿w`O9WAPfg@_U7ܼꅙcXߴj auMu6yEK[*=?ry~7OOt25Q#%+u?dar:+FBtaPRfh#@!<) .cib+7:;q[ N ~@a?m[h(P&ԕ٨H|fWy˭ 76L)//WϟM45B\]P&fVqFPڠ0PwEwS˙N"6'Vƶη3otrY"Vun;q~頄ch[+M <Xig0W?ZCvj %p18c.ݞGdE\Xr{gRyu1Q^}WSAirv.-ٲZ}R.;1O:y-M-lnYĝiBY.&~]/'?_ 5}ݕ;a4! {ӏU~ɕ@˰SϛY gc0s"KC/0g;y$ߣt/ 10c_EHL;KS4}wA_c{",ۣ n)lJV0c4V"xT͛A,jϼ:~rw?ߓVdա e`jxZ@ny5J:}?;Nw?@+0iN I$GSPeLhe"w BEBHLRVXt| O6^zĝnTecwJzCnW2 r!%/zצv pAF/#`%##^QR L:MZW7TN;kd4FVG#.RWht#EUxbk^y~/ubUٳ֮Ns\9.)&j,YrP%nÇIoka>QG`z[`?Q\rp\j";rO*$k q*$k B2+QD.΢I+YWZX)V@+v@7ȇʚc\k\ "8Tդ+ 1H^,g;^nYޭEYi$acHʪ㟩;!X/[;g+<T#ɫRަ dINʷ!&mBYc&t1iO0&-ZDW UkD[  %Kܪh*Jpek*դtP.%t ZDW ׺*QZ)NW ] ] ")̭ W.챩yO"{`+tF- 2". 
[Wj>9T]M |[=}N,]6oU麩T`o?wZArιcl<P;X`~ ne.Gd|r-@A2\˝?i<]Lپi*0ݢk[vXt/y(ZXG E g <0NQ~(o uD 5)`eqLs%n)NJ JP.r$F Ƚ̊hXb$6YB`՞sWƴКƛ%OgZ8Aӂ4(GUEk*5-th UBɺsS+M(gEt Jpek*Սttute0&e0Jpj ]%`B;ol9#Dzt-ۋo+YGPRҬ lNW]d)ьU|lepKR# j6_*֪5tBH-Q2 OY*>pMk+DK)k:]% ;: F_.mt*2xtI-WvgT_PrA9NSEDҵMdNpl M'ReB);iZRci]IFn¥y]%' %lvHW1.\&BW -UB)+69pA뢫=G !ca:: ]q%`htخZENW ]}3tE7zzF8 ]mcvhEW[l7?݂hGW/zʙ:~ƃc'h.UB{[+ռEtE[*tPJ J+"BU{l[=R-Q6,[gGW+MXGLpz]!ZMoJ(i']"] /_pu TpWzon*&hٲ1 ƜزaR4.ź}L]fjn*%Y|$hna!`^7ʭ Zjtmv`JAL XsB qIhenJE; pӺMPlZtΜָ$?g a] ])Pr+]!g6B< `J}e $/Ƴ Xf!G1o5%2!-ڀ,쪯HUB+U*ԽwJ1:5Ϝ3 Vʞ!]2'u$`Jp3vЊ֧$O|q=8{?z::%i]#tujc.%XUHtZBNW %e=]=C"iju03txW CٶDk":DW N0HNZNW %Q=]=CbBqB:DW F++ ]%Js+4c->O <B0@`;Γ:a:D T<tBuOϐ$RV;Ap+:3Ő*vJ(uU>G ٥QtǻJpIgUByOWϐ=Ҙ]LCW .wO1$m;OLwsxpZzJ(Y ~>tEzza 禫33<%iY0H+թ]Ӕw`;CWXqIW*%tP23+r SD:CW .]\Rtj1]%;`{DЪUBiOWϐ DZw0f3trJh%j;](uOWϐ8\f &\p¸fi.vZxRq-Dm?D긆U[~PODӎ0U#Q%*iQ%GuguqSߝ:D^&\ܕY*T߳<"RI]VU7]IwߧtecQӫCͧ|Ue|&q7M]_o߾} Q,ܤЗI?0`0T 3Jr^˚kěkSj:!Ѣ(^o= $< 'ߛ~.^s7w^})oƯ|}Vٲ '{9M13Q|s}w= d5_Nz%eE{/T|<$pXk' "֞s3aAPIU{:6Dx`S. 4!r&-@9(E_kC87 #$VPԖi52pb6즎ZsեْVwZ-{XvvhWV%q ?wWe&]xeכzd蠥:ai(`|;i쪦tkSժ]rn`(B,rHW<$F}?ʔjYW1^A@EdH @KF/Lc+fy1J8w)AGzPAw.!hjd.̳[S eKGݖ1rR7X-' 2+NiORTj|Gֽw&WKh*iOnҵ0EFÕrn _#H 17<"#R'{in6mf=S'a9'5 #BNY#,w6JK8&;Q+33qαqQ$,m7,ntNL`B) )jX1c2bp4hj4sNnpJtOr}a>|\sqn݌d&Jߊ?l {aD\ߜ!h1EA{3,+RVndJ*LF50$5AZX,@vGEwo> @IB9U6RaEm ~E n0-D62%r FB^GAͤk]?w1O&a$j}}aRwyj+q*WԔl-+ri"ߖT%19 FBZ*bD!TB!ZcQf)zEi[(#'8>jK ^XP8331854ƅIƉ\p\§ Զ"yƗ!-oÀ'gw37/fY 7E1~W gϜ7Y4ci  EQ%6D΂26U42 )3@k %6N0p$ڃ)R:хT 3vcp3|%l k7&=kl7'f Q:fP2"]Jg2μhC6ZaBg "d 1( k|@N@8Gc}9l||X)o:!ck)8IÌHzFqKxA[0[I=Z2sJ F\ڴ 6QzHL< bA% I&0#6g?#xqMcIɩEbϋ$z$ltR[1Ens>B Ey)x-|ؘtʇa>!d\1O&>Ʋ]ͯ#8 MF 'bj̥T!G9E`m5uRqǠQGiDt$ˆ^FcD2Y+냉KMF#&ZH0<GʘtI9p!4JmZNYG153|]n& ߇]֨O"YҼo-ڕ 23x)1#MV0*8 ;$@ čƁsc"NMIOIWb&b=1<b [C-ǚs4 kX[ʼaLHdd =!"}mpQceG! 
bi1g;Ja@Yabh ~bYo{Lj-vwU*q]MzDp7_`GF5DgN<#ˉHa=&P핔#Pdcl H`("h P'fg<_zjgQ$©[ RH8paSiGr{jE41DH$wݸ"oRypC@D)1p;4PR ?HvN)X+aT^Zߘַ0S7$j\Ke}ZGCr:)j:`Gl_jē }\gc{4gڧ]*5c J3,DhfpP0ki5K,,3 탕.ބIL /_dKb2\S#`Dz;>tT.]Ye>Gռ,.8̋l{#c'W׋a1_*TQZgr`؃,YiڍI_}ϖo_ffӅ_f̚¤SfW@=ůHTElټF32 4, IC+sisc̵4 GZ"?b}_`ߤ?AU0We_}r$˫J*3X]ۻWjZ](S|ȌSp$X[yEʸT,L"~E"=%|_YX@>ac3e=ǘ4VoJ_iS_} ̒8׺/Iq6 jjf?ǓO1hU*JBq;dدݵp> 6Faf^?1UpZM݁hVJf{|XZtxbCZ ^^Vʠ?LyW5&{0[ZnYx:_hiv 2@41я?{WFJ_ضR|32[ƶHr2b[eEnrj,Uv휈OoEu (NbS)qЈGz!XtLӄ~Lrcu1+חvK|rT%Es8TL>YBN~nW ~=[؎9o%KQ$cNp)dή@%Ii+F\$@ovq_vfoYԧ0B(ckH[EpCDbAL9gO`rQWQҠ`1CN^Sݚ-=w}"5 @tsPIۃiXjIAT2T R cs(M;c%p|n+v)Q綔ݸ;Fb؊bjSTN#Қ3G(w)6/Q8 Knϔt T_JQ#78(8N#JvV[8(Gd(r=! )!Ŝkg 6,(OF7c[\fN Dޱ7oiCr I[ p=q▻E"$i;] i$FY#qo 1Ay4!R:(M\XZ{[-;^ z6p#Z`2gV* IHƒ !X HHeqf췇yAzwS.P yadOaíME\T!#[Z k|gD ^Vjjlbkpoчo|n3T&50D907S{tŶsD`daIԖhpwЛ㞃9NA{渶bœU\YQ&,1#q:#$Xw渹6- !Khy-j7|т:o ydArAaRNz!ZL`NyrV/u`c;qQR'J3^%)lTV*\B(y^ !ܗ7Mt Tecbfg#vzP#wTʘUF# 6)D#".&wdΦ&]F" 2QB9aLI)v2Wx2Exu/MY{=FYuc$L޽q|9nq͍z[SSgZ~|3=o>xV#(0W0]nl#p4# ^e@֒"-XU[3dysxO:vWG@O\ :gdkouɶV[j4[$wgk$7,pe4I,FjbT;W*J%Ήrлsߡ:z^?~ջ|2||#0>toAGˣЏϚ˿oLzdެi a]M]vˇߵ[فo<~aa0g^ՉG>h52z0  7U,FBP0UTfodGAu ºG_đno#qKi y۲DCBL"3L +wrzcލ 3WV!' Q! *Ix290 `6 R dAڻԟtrsPk:VcNvNgwy)vf/% x'@ nm$x(.ΕEV'4J@fVD> M=Hx{Pc{eK|ofXmht8-wm^Ahayl* .Y˵pAb" ;c$: F8Y]+.ӑyaZ)Uox?!SE>ȉ0>a1ry7? 
mo Wz6܇/YU6kG'h˜)6„񛃳ZbjN+ ?m Ҝ սr{I9<hy8)_4"P1P㜸iQ~яY" s.X^93sLzũfs`F0m&**nM)ԉao}aXY̿ ):`1>cPe2ڗc@z=Z{ xIdp@7XJ ^ԂP(4ČќZR U$5SB仍ƅ@sqcc I2&BYH@xF4j1Ha]w>qb8G@7uի_8~8c"/^'M¡F ~=…}3N8>N\#8?ã >*;vv8)Eq뢒LV =*jU{=q8?+DutZ㺣?7 ?N[\gVhS>k/[y>,㳻k*h~돪oϗ7gᬁً=-x8)6P|SQ[Mn[Eޞ ,='}r\l,nG7㹨V=ȷpGzī7[)##Y6`QY qcuDl-, i[g]yR^ʟez)Kv|\^}\)mg):M/_Vވ.f #.q09RA0LJa :W&eNeub$|: ڽugLKlV4d Μg=O?(vNK!Φ]\ʊJc.ňEmK'AQqzF{->"%!A< %aPU(2"ާZ5Nvv/W԰`Ă=ŰNn{Uڟ|-ͱGGO v<}&q.H,ּK\c%IIO=}A"oH$@j!8!c,pZ2c"(^`,K!urfO'~>Z і8ZgIXg{"zA.:ۮ)/}!`s\u J`Ї={OCX/9MFSĂ^Psp}@lXScP71<nomR$v3S£1UDB0*QYíάڙ8F7'6_p:❴J7*up&r{ *or;ƫ2 `ӪΌm]xGӪN!mm,Z&#x@h}չ =9F1OCt,]btY^_=p@2|YU%K,Y6uj g5~wJhM}y9?͢y_}P{~nRD2OYgE7r9H+OzSUfWe8[͟@`=7yN'>o4܅xwA*,b`jx Z@ie5JRp=w ws=خ $I* ;1 RLٻ/Ii/JLVXt|X.<ɟmܹOǃw#34jxJ{3=,}"?볳nצv pAFOiu0̒`\ĈW$ha5@bsG igҜhښ"uI{S"0T=[\T LWJLS[gz渎95l=|"f)sfYN6}y[^}mqV"RJp\j"{ # B=8 BJ=(prJx@h ;TI+YWZXġʂNI@v]O!Ys` `ATMZ pA1L Dz gzrγHПi ǐO!Jp0ֱvc0waf ǫOWQ0XRlU0,K١ RB0[ 5;jܛGCBh''Am(A7h|\Lj֠XrToiQEA&Aʥ9`?(VUď .a:<˜mݴ8=~V|~_i1rG?->o5a1]]"RJn>*=_YtzU_hU&_[U_G @9.^"֕MϪٻ8+yӚ a{^ F^gٚ-MUQԊ]CJU'##NS]yv2PNjt;?oGƾru~0pb۫~ӎ;*7ׂ>mZۗW? pWhw3K&G% kwknX]֚@yOX[X;N2F{zHOLWˆi~(-]=J^SOF҂j~gS?;^h NWz>teM0_Ԭ<-\BWcR_s+&iIfpVp]\ ]11|t5P3+!v.`Y] BWNWeztd=;ՇV?Zdek%k^o5`=Q>AMb{kE4092rn2?bhzi]-5ҿsiGE `/f1t<к#:P> ]yk 8.qGBW-c ]=C 5.(;^zWcZGNW/t *Z'aIͶόCWcR ]fJ8h}|r`O}~p>ztGCAWBWzbrDWՀe)t5F:vJo ]=CDWBX ]9] ;] ᅮ#]1{%Y] jCW[q)t5z>v(3+c]]0YZ ] _tjt5PjxgHWJdj$2*L.Nϸ=qݜmj龺J5_z|Qoǣ&y㩈A \C ʏ+dfikU'ND7oupƅ|F[9B\]끠*_Z(i4"؟n`|9mF}Fy] zu7i5S'(KW\H_ s\r^wZu*Mڢ2ɳLf7 hHZ9b"65e3w.O}te2f@,g'>vf漖:Ęz-V?BLEe1xP\ܸ*g(ˍgyiAEѕ6<В=]%ճuc7$5|:tg ׫_Nζ;an3pkBot&${H6cw(o޾ꍂ1? ]pH8:XyOۡݼQg%Itngl#v|p⩝Z?(}~w ]]n{*T}vo?'OtթGTG'b* {Lq&aY^ͧC씟kď;v?C~V9d%PIL.Gqlr̎B"W(W!qmC0C}E_]C{~~nym^ÕlOT]Bś>R2B:{0]:GSk̞BJ٤j#(ժO9W[*xc%TC0lRt䥳+oT$s)V,^oɦn2L*9 b'J (nN2rZ,qI0jWJ=)jv>[7 #}yoj*zW-5w5DK] r%j#I͘#jHTrsS3[ ٵp.脤H1-^rJx>фd(Q߭hubT=cƌm=@=qϰXF6ي06Yʹc0!w!0Ѩ*T{}0f` P! 
A>uh<J?͛TڇC6TeGd3<%cNBk}ԜwI47-wqZ3 $m9ӝH9YC%x}EbK$nE"NXRhQK 5obm-*&FXK>`DmMeЗ E2̬f5z.WP2 Dx־XbO b(t)Zzk,dP)13ȸT'H`ȒnB,AF׆R3uiB$~[(_ "O *. rN3z3J^-TTlPtԠ-;: :^v۴j%A$UCˮPd$0BJǚڨn8uV6 TCSaa+k.Aken0\g$6bQ4o:tZ{f@ Uh[%vtZ[Ũ8l)S`q\b ~NqokҵtT0VjJblH`W`\d) @pPk@ouP+*әPBg d,q <+dWd%TR ++ ȸAæ~cHse ̄A .݉hPFt[s : 5 q„1!6KH `4ऐPg*A .LUAA::NBaVZ hRCME8E@)MA*x5b<;"K` ;lBJ!u\F+ބIRfd5=()n<+VX3:r 9qVo@B"%9τ@eR"*`d,["1ޠ hC=Vczh2&M ϤA/ho8uhN 7cE͛2ʉFQQ6`jrnnظ|=.kӛxoTAgW {.0Amz`-T&o b9PT8xiP)p!Jr Re1'zKf  A9AJ$rDOk l5$tqxѼ<04'e Y%ue@vtm^ ,\UBN5~t}ky E*NMH7YKVIb;> Ʊqxl_^Wdi&P詌6%V,XBX;K)GQ_}CHG؞kD0ePEzXKv %hB;sI X>:A*rB ;Vh QgՌ3{ˎ;|Ϣ 2 ߬vAVÑ$ᾠ{80.!̸1U 5< cع;_y-\t:]_lM{uzqn8 gt,- PWwH7j9914z2\ZHCI. LɊ6YǨ!q5Hs-'ho1F:ogC#f'+q  E yآ5\ ty݈Vmy0ўg-JtܹSAS!L3,( =-`oޮ7̰6vE uw+Eq$Qvf!᚛4r?E bZ 0p k-* Yrf$ޡ0tyT,5H?<6&RM 19ksAf#Jm܁TxYTAldjP%mIBL@hF.d@ל:&gz6D;.qM*BjlpgFT>$ZA4XAVq6+f@ F\/dO34%0h%#q"=-֓8 Nr,Cɵ!3)tk!wg…0q͚\\l/8 ˡ2fH284ubM%Z ]\ 켶Rɸ;y0B.8cT~k,__/.6۵a uki\֩Y*үK1~-5^cCkz+\e;Xw}˫ӟXg޽asm]ǘ7?];ݘ8w9X8 >q~o2VEWߝnԐz'Z_Z.s^]r> y&Ցӏ\d狫 foW$›|3/#.n%s59:<;|W5 M!mwm9b۱oVۥZi:>:"۳Ԅ?t"ߏ t_ңภ/\9/\={+m8 _>,֎m[d#}͊Yyh-_ OġDzˀfOUUu}H WAp+$\! WHBp+$\! WHBp+$\! WHBp+$\! WHBp+$\! WHBp+$\! WH:( 9ѧD%;U6,@Z#pRZϑpz WHBp+$\! WHBp+$\! WHBp+$\! WHBp+$\! WHBp+$\! WHBp+$\!간+XZ"\bMS!\Tc'\pzlHBp+$\! WHBp+$\! WHBp+$\! WHBp+$\! WHBp+$\! WHBp+$\! W{ZqJ+ <O憫,$GO)>Kgr$\! WHBp+$\! WHBp+$\! WHBp+$\! WHBp+$\! WHBp+$\! 
WHBpF W jFRz97b1EÄ;5ڃk#S⇁a 'q0#*9 NSf9u's H+=vug`ѝ9tťTUcWWYQWkce;[7i /+2Y }ۏ͸ #~UFWDX=ׅ$B˲D‡O(MΡ}en`;!ؾ_Nj|bfC+ޜ"ټɬ J{ɽ' 8jh$2)^ۼ^])jx}/|:;Y@ӫ9{8aح8Vo=Uzn`^mR]]I?_jEY`y:RWSAYZÏqd)rT!zagjgLWi9r ۳ݤ͸gg/"$ "XVؒ$Y#eb~ 0ڼ5}fy2zny4X1x]֏~OyXg;Aג캝Rl^"!<2IEyܝ K\W-%YQ n+sXv^S}~ݽfXjtctfU^N*ݴ.KM%V,*.)LSWA*LhM^O=|˚{mx=u=V &nP|ź.lG,)K.YН-5wv,U+%?9HUpkziT8Ie`)Oq G\aT}l/[Kmns *okŗE`BI s~nuݷv9{?GUN0fM7`/&k$/ݬ [3=,ژzJ?A^̻x׽vpMgqʯ2FڶZdt?!W=VleѦ紕[YUǕ*Rsoue^:0_}uWoez헰z`l D{ `Gza&i܈ퟶ5I]d|wNNg[n8l0^3 }x|sM~@%w͚2ap > [aRE%՟HtaX.pGꜧ6X>͚h-'H6gH~9UQ&l0,g\Ӓw /lZ={mr撍o*sN:s.:~3- xJ#_2K }=6}?,*v=B|wuFKκƼӒU`SvpSrnB`X@)bR_]BR,^F@e6g ZVl~i+EHJr TK^oJR :0DB$ ϙ+%Lߒ>Pc |h[<$:ߣ]ߍ1Ó騘 St2ɬd:Ct Ө穉<5&vA∟}ѻ[ E8U;͛IWr;\f4>&B@ۏJ$U%6rKX0i D'&Jሊ2QPC$\mdNaB{ҚLz/rdʠ!shb,&} VY%n>0dVL(@Q5!EbwFlъVL Vnͱ1Xc>K d$E&-L;#BmZéAbf XyZ(Fl↿1bclLj% }Lh>-X5_ի"'Wxiu7bqج#CG0Qk'Lj>17ALu8Ibj $tl5:aXDnz)IHdH!VB $0@<ܔP DO[X5^ _p {zQM<(u~Z9@ S'si5|Ne^=shbGkB9a"Ze@WZ&ҧ(*zM՚ħϠÛK;-m6[j0$_hфy%%ɇ8A{݋2#ZehlY0[B da68ȱBe13)dFh2#>S"3 9v)z7jc8q;$(RjE&dAd5qM1PE*V#LHyuY C}dI>Hv$!HPGMX7%%`D 2hXF5@ sbhLa'0Z;m2^Z2EbtTF# Q,.C,C%q'QG*AZZ:J0֦,NN  _#/{ 0<` FǒQ rl~'y6CvmrϠcslju0z4Qs;>a`s#rQK'U_xYAJӫ@_)O)O4)O)O('3SxBL|m666B_z/nZ "y%# Panr(d*{D .\|17quJ.%%Kfڔ)H@Z*rbhuIy+Y izc^C*.~Ʀrt*rAHN/o/[~,A#v`'Laux8ha{fשkv+0 fTAo? \LW6*LTrR\VI՟#h{QνXݎu3 D֤[&q)Y7! 
(r*תjy-u ݨ7ҁX;?|3lL`aJ]XXrvsonRd098_gGun4X hp_e[s0+\uw{é]aΆau}(em =i^XKktng!76J[MUZ?lUw @JWϥ;{-l17fU!,d1W{OMnШ.Z^#EaE !))Ín/c"i=K]TTL':Z͘h(S)JHe55AJqob<[xeqp h:w!7jlv3Wв!GuKb>|h6 Ħ\F%`AFh(%#ʙh#ё.JF&HJ1asx]RYzURu%.IVg-}42e,2 nL+x%m,1^fNv}kkMQ{{%G-Jm:upGi~i"`WB?V,seV&LHv&GgBgFB 6LC@;oy7]`A_dOG2 V-!S3lf_uױC& R8Ef2-2)L憄Fq dH:%Vܕy:%6;a#7&z"`'Gƚ>L5d)*;&sk-.\h)dI WBu8MLux/Icil4t Y%Umۏ`xF!@P#κ(-k!˰ɸ+@0g}ɇ0Yd K9jD)t,!@deI9ErJ8Ǥ)UZ pOTju75nwK'/~0io܍Bdkp(ɱgGĎ!;ib:%9jt=T4AO&0S]ɉȐuD 0[{ NgIaI|[uC6-qy`m+C h&!_T0EYb"Ι#d XJkօ(5ԋUpCyy,oӨ *sbjR=WS'7G^\Yoha&ҫy=S< *wpz*~N:wHF?ؿ5b,-镥v@ݶn1T5RR 7lB!0uEY,4:j1׎s_g't v3 5;kvb7fƂNy >k/ ͤWD`S OI D> HI@@.ۘu\fXWޟJ&,Q%l <Lpy 7uv伤 [L[\dfb3"#/?D!@h!Q$F"~WܬpҬ-M3v!vgd۫ۻwOW);;EC!3dr8#>;ì6%B&20G%pNHS1{ʊ2&fAO Ll.E3Qq%dBpd33qnFJgX WYM]z,%G5fjړL38 ;mPpax0b4xƹv]@e&i"13\% !YA#'J!$߭C42uC1\dz!EMtP@)ja#%+YE V2 t]+qnFl?sWPѣv`&eFMHkBmuf)EV,s cbfͨ ޔ\r +:Ȥ kbbA$LPD 1cdD*1v&xv"E="^75@,GoHKqF-уGwZ6pƔ6+e r2H*)e"$KZNs貮ĹE%>'\.sZggdS\Ďqz\q@#GsQ%&d7:eh㨈#@=6pqgʫv5:6C1nXy a5_{?>#gV2vبYezqkmA  uJ"FL rNwIpQz%'dYlfP!RSAG%g*Kk#bNW+Akn<eL>eA`N,0-ubm]TN[I ,DNɔV3SpJq:aWb8G(|\rWEp"Q7$x^]mqMrַ1Y6{D/] I2,+"9eD5LE=st+L |RKq%hj 3x)1ZMGmyŮĹe k.iJ;f, !$J&U Ioy&K4&=|eҁz 2D9b 2s4N^x#`LÇȌ03[v [RnWGױ]1^r#e X`Adr*rc0/9k$sRTRȱױұI e|W~պ-еVYf1_|@Uo-MKsʹ1 m!Kb2,[a`6EKDgy53z_/_9`d!K (ղm4Z+d {RrJ(S„L쁧Ofoc=kGQ 9qoQȬu `,WiLjo鑧)t݅aKc>yDZsKh孈Y0($۟F  qr&zLwisӄ+}-82tPw"E][vGfa;_ċ/]+ql,U ,1`L:KX WdH;ft4|jlBc?8OVN}m`yѪ_ Wϐ/F)&\o&(^WΏe9d9\2fazmGpdzSvlP,巟֊Sn.l%֊yk9r;KxT14h>xZ./;vP]'*>( TñwCW˦kߜ1g& hV^\Zy^tj_TiT~NrOmW'7h&E08v'.?_}WҨs#.'ƓшvVӏҌ8XŢ5^e#de.kJիUh?@9U[$ h%Qg17Q^>&<\eM$B#~͎遫u9&5m&ͷҞ},Kz7fYzgڻՙ mQƣE N1+W<-w'`cu7ԢQ$7:iD?q^s8q>^Ϻ_k\,|q.9WMœJ/-TB7^}!߬6.OzAM[9 Ya]S%blu|t4~t̔-0NWxhal4lMdr~W[䋆[isܹn7d;S|0miS{>^󠥱x`s;nbV]}k;,zn9s#~67np'J8<ĤYrJ0BKt!zh:YjO|:X9nJu9Bff<ɤHm])!r̉hR)LgȌK02x{ Y ;^[^X٬/J%zRK@ T_*zX (1s\g<4\[\cr_j_|7-9o$KƄ£TeLqXV@(eR.i#d.Ft(6'w I~'-ly4,S3OJg-iV|dI<)ДY3GԄ1Ixtaс&E(o'נ#JCMۊ۶ tN>햧K3:w&x5~i5Whɫř_Jw,?l£kfhڂP}O yZe^L\~iYE8O'˓jAfƟ^ YGpEk. 
\i%:\)zzp%jU#*pUGWjGpE+{WE\WEZivJp 9x6S,:_HTkȋ\7kɣAZ]N׾t'jR[&`wOɸ">5fb0l1 곋7m=B"{E\76p$2#69{Fw>{ q56$;(L7'ҭZHNK%t@18X|jjut{7st&+mH٤ ̺ԁD*rŬ+8`m`dٖI~ÒeeIc)fUܐ\=gI v9$ YBf^Ttm-ozK޼yCǿ] қJY Z9i u91# *\Uio}%t\,yp-)% θd얝]-% t7gUg[]Ү%:ye1`H.0@_ftu]Ym}3ίCk!R_۟Ї瓛*0+i%Yة 9$?#71J9qIG`>V4kݛ6/agm&>. (vA tm{\%Ho0L> .>[,u妑 k nøa7˧T* |NS|8~/qMmrS{Q>|M6Y$Z y hpJw^inTӞiYQBH}_ݗ?|O__>/~N? LRCᬉ^M?^E}<ogv+CK hsֳe\>qZő[3o _P2~Y3|n\>r*K߆GrhsqYd@o;5Q熔87/]4472=V4^S\iЛPIT+Ggik9G]WB^4 7j[M&m[ FWnR{r/5dc[.%jIsMo1Z{;ئhvT4YƍuT+/|ѷ_} P*edve$+͙0[r*vP5PJ}zUǫg~Eʊ# c V&LB#&HN 2t`b !8V-WxXJ!D߼u9ޮ:nĤN:-i3.'f Q%+,>DjUiygp>MPgUC 9+(vEـo;H X-& )GV-h1!;aS4Uv6TR$-  cQ:wZLռO1~VZZEK ^ZD G`,,9CJV: bALHmbFͼrt3Txm[J$JU `^X9˷rM8,Iv>txuibb2+FMSa€uY^"ׅ4N&Galʌȋ@hU{R8#Rي:%=)m8#lQ^#h5|[o3?kϑ1sӽ`ؑsӽ\Îe:st/-'˧w#$VZTypr)J`Tuu}~]UTlGm?~!so,Gij_,xLg,Z)cU0NEoSRϤ}MLp))#@A tUUџSQ( <-ΪAd (P판 )77) I󼛬 ;fVY}jy*J9ʼnGZNp.{n&Y<{i4:Tl+:8%gN5Z'iՕr_m#c43'T6X2ԩR!iON'BP!jR{3"JdAhd*x,Ѽ@$j,XX捒*u`i` E )k^H*zR4jG9E-s5rvH^6'9gb9åޫ.?ޮ8r 9ޯ4]cz`l 0v`*y?=1^=^%C?JXQ[T W%TkP`U2vr:2JI E4Y" KBDðrZ 2i! VgvaP}T{c{qhgtY8@hG}pёБ4؃#:&4G5oL?qkUM媋x/(ב!U 0Wy]>&%%Cط,ӍO%}vK``)H"E-ӄ$t\Aтc\+52<)Sj4"64cKuOa9b2 갍rfәS^[6-r-6W?}7? ACG4C3i;XM? dVxƴŖ^2M3olozGzka|q -nq6XCVB}Q"yL1Oמ5=kܘ ֣*lP\1LzM`Y RS28>yL(}J7_%|oܦ&o2J^ē瀊 ܣנɈ'ZZHBe0'`ԏ՞ Lfم巘P+qزO ƴk_d$$/xIf>$Q-X-b?O>RTnukZd6`wEԽwI@Bh%7F61i#~Sp^ʹ݂X!}ucdRIDh<@s$T:FYPP9l N C6%Be&B fÝeFr{oeL̜2Ll=.EQQAef<̌*qac3cO.\wdm upK.e@ݐ'I18 _AFɇstZxBv\Xb֑(PCU!|7 ^*{.2P|= i+)jf#Kc3XXC$ht<~2k*f[Xѳv/v%Hbu&Vg"dpJ˅Yc:Ž (K]=z !E4pML$( G(8|2Ju%06a>z7xFlj~ˈaFd=#xߌׂ|־ttkny A K 'wm6PB')8Yiʣ,#`DRIQJФi*:5]-M0xqJf&[gcd_^ y{SF!ќrT0lzs eJ2 i\dOyؚҥM͎}P5̇{J>5Tzy?*~M e9C 3%ʔX.MtR028s1l`]ENIp/,=VO9fdsN5 :*=Q[U+diL)|ʬaA NAI km]TN[4DLV^'pp|6)8[|-fu5 5m"M7@$=0b6ECڴ(F vU"k΀ !lRNZEc &Q1g 'kت\ 9kEkl~0o'Z; IB=Ơx"sD彉Cf xf׉"%̴7i^;Zք%{%ͤR8JPsɈ.V'i_]zsĄ}֛lvO M}ÿFiRЬm ~։$-CZk|KzjVg \3_Ldz%?P,K;-{Td~j)X|,PӢހ*Ix _Lf]1GIc J[: TS)dƨ.xɱw\sʸZ?7F㪉;8BM'FT-+nHm-idfT/ |զԦ_d*1 a uǫP ë<_mw6J%Nj7)%'bTt48ff6)Hu4K %(#y7.S,g=#p. 
T~뉡{,Gc{IX(W)5Ї\ sʹfnQ-2W!FS&Fꢥ6jt/əeTg ,014gB>ffC&FC~|xRM6$GWfDppO7g\存=q⑑x h"(uhVCRHJNC1s=4CqI #Nj3GW}Nz(,kM[Me0s?f1Dy4ʙW}c>'o|Whan+4!805 4Ȇ&Hp[r6 8@Q؈Ek,cT &Y8楉 pU ԩi|Fm/-SUlS}ty5~䮾G_>+~j6>XϪPcM'C.j:uK+D Mcd|բ$D$Y /[ 2?~Z)h>(Wm)9Gb#̭qڇqWů;#U JU/fx.u˿}Vo_n/»`_}F ܚWO \Ų{SIc9¡_\.󠕲_?sYC̚)A4am;v79A"qƳl`{Pwt^$byc4 1iPx#kHoi>kOz>E }Xk9 u]BIb&4`:xrıhR)(4"ʅ[ړ^#307,+k+767^˹Y\(Js{R4ъNJ͊ȍSpRBR WS5^ (˥"9dGH2ƫW'ZvWKڶؑ|1A{%WR]0/"Y 25&uI3EL9A ]"_ooV|?NICCNZ F(&L\FoȂ8`$q4+M9.ڣ}]gdpotuzhIS[zD-Dt&m!+i -mO[(9aBn4Ctew d3+a QCt ]\JYW rvBs+IYaR*ne\VƙA[)0kA@SϊdTDxYRVRUP@ ``>qX3 |65աr9WJ#9i7zsqPculs1zJ(!D|z5K0~~].FU]Y}qvGyUk|Ygsk|FmPN( +p(jY1{7j汚(K7HxIhȥĖ^z[f&yAGQޝ+F-iF5RpK(1dpA֢yfr8 %CJ5-]tZZуsG?GY]FvѥzSبΘ]1-Z~K(d7-iAY(]!`i;CWWwhm-RztTw#|ijV]+DkY P һŜ#]YB+LEwALW 2zeQn1O̖Coe +)OLW;N;5ĜvC6c]zJCt ]!c\| !Bt]+D[OWrm|SJU 3":CWWUt(H+A9Ct)+K;CWUtut%a%mkH-)\c { Whk6toiBi,HS4p M#ZO RΓ5[6 ee9)y9y^k_^Zs g9?<&֟3Y,wQ|V;m,vmvGlYGQlO;\ՙV~DWhq@wK=VBZt0m+DI{:G2LG #=03tpegň3+˔]+4qu'OXZz!D){z:tez0ctwKOLW;=gnhO_u7eIPte{w)Z6t.]+D+t Q|qaI+CWWˮ䬧3+5Tʡt ]!\ec vBWϑK vEg t( JRLA DlA>Evc .քɮ٭_k1am,@[gn_=z@L *e@1f ] N#O-ppvR„`Dc5eZ@tŴhyM rMM cZհ.}gF\)t(uOWHW-%;DWZΌhOz7pste$63tp ]+D+E Q-jOWBWV JU =(e =Ђ&4 ]% 仪hM$Ѫ/ J#::A2D|%m+@ ` Q2 ҕ%Z66=.o3hu Qz=rYP"cW;#[W#@ibXb]:rNi \u=2ZtNwWlmk bBW5޺BS+N Etlupo ]!ZxBVutut%Zǿ\BWVUvtut%2[gFv YR3/,rtUh74>zָJp miDXiQr Ҵbm+ aYE5+VԝUytբEt9xlպt(mptV5EtjO;k\%6e $]aH&g8E+D"(%# Q5*5c=]UC)v['@WC h]!࣯ Vkn ]!ZNWRNP_"C}t-m]!\&BWVЦԴ+.Etm ]!\mBW6޺r+Е\"Ǻ53xtymoMΗE%TH9oͅTBVޔPI|֍lMRc$m*ʭ䋝*6BJ#L7m-1{CV@E[R fҭRXm7h WrQREĖ-_\Fw^\v~Z\ݯ$opp7~-\ \bHY / Qs E [|o䛒<@c<ظ˗~-B?tSXۛkkU/k'PHq+ek5(UyWxg{ÛbV5N2qKjiٮ`ͳ_mJ Q1LJ%њyHS0< MGTМ~Jf3D'ͭ0sPM_k-y*bLEq%{~TC0,3W!FS&F1,՜I0<9S1S)R e' 6=˲Dlz| G8\>!kIʯ^N&0|ҋg%ګJl?Fs7$J2H% j HS&(omwʾ{enFpt?_qRzKx5>A}31dbJWmԺ;U mٝhJy;Oj;.Q۽?fX?w,ʕ~tXۈVq/nzq8]NJ2ZðnU*uy `On`F\aa:ko[Mя(~R>2%${R2ЇLYqFZDzch9 Ay!47%Dc ֨68UOu$ehLa&0Z;mD6Ve- ¹)FGE6i9NYCY]^l:n #jy~~V?}HّQ6@9C@:Y &7\(X.x6[{_N@'Ҙg٣݋jɷUo{Z{rw| =sG Y&3i6gM8gY ,+lȔ$ ]]8 819E3*ͳ\;"$ N;aN}"d!s$4>'a.e7 C0T ZP\TMd’OI+[ګ 
pNy&ÃE`*ʷd>(rƝYfn1Ef"TZ6L6T8 єjЈ@ G B}J#Ueo~KAE6v+bv.G,)3,I ]0˰$|dGpT\{a$wM ~g') ct< C( q*SoуE(5D6L:)RtH |h =F"\԰`"*wp-T$!{]LG8ݿ C3ޅͣ/8xÉC%@``o(b#`'0XͰ{gna$?*oMn ߷0[/GJGĻ2]ċc>s<"Ԁ#(甇ӹ_ ~blHr* $`Լ0+|7^CU>(Ai(SRn*gC]D7anתbQAUy'zQoA5>8Lތ,~ Liƌ¶1nivo:2D/ `$f}MxٲoۭHo2t(t8My_MI5陼f[X^4=t2^whP㖗u쬕k]7r_{j Q9vMk%_2<1}ڛ.þ~wTx# }1"M" (xf.ziǝ _<ذԞ?^kH\|s(x-9ͥ`. 5ैe ݚN-k:󴸼hʶdij8wױѸgYm9ʶE .8҇{m]\@+Hw|C2?w}Bf҆z1"|4D'-Qy TUZn]&f{|sn q, ݧG@Egi[{8~6oC'<߽;sa28c ]1 Hxs?Mxpcmj[goM2<8v&ՑL*&T%z<⃴Wב|\&aDa"\">uVlۼm6o֠mެǙSDI4ˠD𤥐;Œu͎H *HedUqQ8!('5jRK^nPys'ظE8 p^UwmI_!SD~0] \ 6p^è5EjIJ~UIŇD6=3iv}UOu}quDu+^d:*e 1_} 4voKw|G74aj]N5ʍ2`rРQZ9c rQIֲ쀋`dI+DY?, !hNŪTmoKrl5@DD8jP=״]]_q&{/ߞv+z+'ǎ=;*Ub6-g%5(QTFE2&0TDwD(oJ)BPQaP .eFE(y`9A oEHDCD;^|LhdmXZWX׏O4 |@,I>E}2Iݎ WKݮcQ+Z{vJ:uPg=ߗf7Vʘ(m8:7Ȕh0$T29ciȅ0#r-lg"m.%T-&8EkIsü&$eSt&A]M R2: :SѺc \Ef1z̑CFFcQԮ^͜ NZ"Ase@I12ZJ0!9+ - Wg cnj^Ьa2y E>M:ed|4U4GtQ0i;g'c FԠQ9ƧlsRw1ttq;Yqsz/($ͻey84&=c Jqڤ~ٽ-IzIqE*dg־02Y,?񵿐}Β/kefVF;J?g(WDft1l U ɧ) B)<^Yl}>>W`;3ΗN<ʇ1g75z4Y`w~~^9^'tZpeCrŻKKοK6rilΠ¬uG6AjPKfz; 7lhf{CjPf>ZW7|ŝZhgg8^=6cyjW>tÍ"|ϋ7zMˡxEm+u~EU\)qХVHGۈ.t/ŀuNa ksJq|po۳+q<5PD˽l~8ÛtJs#v+q{ĺ{}m|bDzZxfBQ)[XrrXWU(2CJgc,R F*5ed d+U3g˂ZcМGR=fqbn>l8Yl۝b[FC2xZ5ȂˢނWZ KX)3JTIJ!I.yϘ8f˩ zrTjl[yU#/=Z"u1,\WtE^h"xUhJDF5)(x\:)ʎ.Bo@u]Nܵ-_@$y&hݬ C~h&p4%ueaҩ$tVnRzAnIg'SL B)+)MG2Pv-s6;j:%ͽ @\m(@Jj5z)z[񿼨Y7Hu{dLQ4 'G0M+ǼuE! cY1 fDRRUHS1#mYrAГBb1{Rt9+22nV@1S"WFƎX"W wյpK,V!fz^&NnYb~nYA%sqn8ѕ3PLf`4"~bV H'n(DA/ ) lJcQD>]9EʛRfڥkq4K!桠vѡv`jbb(\6(tXޅ@#̔%+]1x΃uzGXJH (:d kbb$\0D `:@Au"ٌmP_"8 ""C׍xX ⒆tV:k|Ā5p + y3VQlgcV\p N% 2WR$-` %s-s6#׉j$\,!ZgQ+.ʸ:\pq݈Ǩ\)G೶)nCYv\+RJ)pqx(xXmt슇2oeVdx JK(Q2-ؽUTQ!*zZfl J zWި/z-Cɖ NfKOd*^x ^3TFj`1DCu'vO1UX_|m X2`BX*31c2Q4% $$7q&}H s*8IC >e2Q#>ȟquFW.zKQV] dH]=E avͻEKQgH)QM!F0! 
WmgOtyգ/+PCm@:y` 'Ɓg H!TDۘî@+(0Pwi={0Fvy-mǴiUY9U: ݲBG q,@(Ѧgϟ͖K~#発O~Il uD>淲|f3ؿ;Y˗x^]ЁYɗDҔ0IBT l#s;l dzzݜЁЁ2O=ԁgѹ=OY|N2=LeH(x}FLK{a*_}܋ekr?R+Ă zy j߀&(OJC ~%Hvo!Jv7Lܗc%WKƙ`&AϵE# RKfJ РPȳN<'g &^pp(EVn1dy dzxgo.$c9EʌN6},YbK,%'"1& (7] EZ<0C.Q\0 ˛4*.a?P|0`Z0_5Z3 (-Fb`ZgaYp4!FeSfE}ES ) EUA KTjl!FvkVHbBL>"UjtRnnj6W[DgXPË3h'OxtdH Q fFXVJsJS…l\쀧ױ!ϓšcwyz]o7tmmX\mo0C@ڞ|~wt0(w/{NҰ|N%QxoO _ W;;_>>K'qH}; G)5q?CєJ^64) Dħ?fśyΘ 之ϩv񋶄uv1{ϟ>&PI7Q[lϹ!ʆ $EIЏs8?{F5#!`d䐻[ w_΢%%?p{)Z -ED9uuWU=5ojp-sio $4R߭leG_|ٻ}{nM`uB׵ 4pvd4 W;_9bw'4-jCr?%^ǛB.ܶ MKyakS+GW림 VMOEJ= }.{!i76[_'hdrpq =f<}jf@tudzWGl5ls.j:аix`HMEE56fˋMx+0 -}~EtX_bgv!fe)b㴶N#DDa3ot1wC:}Z114INDrJ2ͥ 2Z׎M'K#SĿ=EX`s3 .XrJ5 Ɠl]y3ǢIgdhE& ÃDz^WGU33 z\NTJ@_O:~xFqeYfYqg+%D.}|5YO >mJ#$U>>jɏ|vz/^X?o >2p_+).u_PE2̵Axfd t!NQ/ntV> |U;% 9i-oR)*8e6,|(, ꓰ$)jrק]jC{T )[XyO{‚RSgXj]Z \*h9vJM̉[N=2WͭzлD'tZJn(;Е8վSO5r#`Lo/tUJutUP*sgHWCMֺ7tUZZKUAy%]qõ=+6 \FBW-?z*(<3+a }X \BW!NW=EWϑ$5*՝\w/75\?lܨ7mG853XĀsj0 "94ߑUF)='LK%xkm}tUGѴ ZRM?CVV0MzDW9 ]7wD Z͏ JNt J[el+JՈBWNW@I)zte5OUhoսI֒oVPStЕr!ѧo௰a .'ij7Ouh7JjߩDizDWlX}jvtUPZrgHWu &*hj7n(͉#]qʥ=+VR \mBW@=v*(W]=*:85%L./'`B v{)q]Z] 0wFB$L@D_)i%3uLT)G!^;ໟnR00RHg"ՕjỲ*Mw^6LVx*EGk-++,,Is㾆Vm,͝頄[KDzƅnWo[_c.Z_^6Yb8M':T?Frx]hjT-umQR(Eܷh8wT30 HK1{y6x37/O>eVwkFiR…n|pRZiԿ_ {?V?|IltͶW/`4]&j~X*rEgCEc̨M4VrٗhG+wgO$bh 6ӦXn9h4xat{%ͤÕ8bJ!B誥E(r.RH8S,DYtTGYw4zcxancfIyYjxJdMoy2'aU"W"[ymd>u2զLښvFz!rV$nIlz}Е;5GۛZ=$X%8RTdL8Yf`*)PB (L6}i-?n:1B>7&#tW[ʅa;{so݆Tz)}^\rWl ,LL|a|ohE 򨁮h6wp:5~1pjWי8OW~8{y~ϼֆjzߺҎWcWl4;A6CִĐ[ޥih]"z l>CGF6+w;[!"_. 
ns.(#A9T~bap%8Kjݚ!?؅P~d2վm:WYyDF$7]6 cvυ+}w· MO6+mU\lvw=ObRjk۰aߝ od|} \{%- PnLd;dqDGc#e,`B=m>[h[u]6}Cý%d%Woq-7[37~i64߾L>&t$5"+ wݔ˛-œ^Kqu1#ؕ'bcN,oɕ˙ja13KWʍf 3-&y 1i5DlMD" Irѩk!gkEKX}tubw9#炝qL:;[Zң lWf/z 㯵QP6> t W;]T ZNjӠϑ4Ss{CW.}V=]9Ua>`.sս9 ?]%=|tug V"7ӕ4r{sZZ m^ >GzfQPMxm@ތ%)zPF[ͪ_s[Jy矯Z%6MMꉮ?|r$kGvf#\Q5L,ˆZql}Ymل3AT俆sxu_ʅ=,7/c$񺉹gK3ew{5lh2[`>U\T5/Ɗ5-?N-HF(ܖ(̗{q*a?zyb|rd.1w=]Ry]~v>{_w@fI>3RvyfIumrLlIUG'id*o'~%2@]z ~|@bLpliy=U7nNpӧ\ˎB,*ZL&:YԆ+ᕗ!tտ^[k R8'.6&B,s4TdMra1wԴ J1HIzaTUQIB9gڨ5<S) '%ZK  1PR,q&R!eF:S#u;˝^iT+ZpkU1B6]Y=JgI(%ZDed*G$%V&*\"$[ j!.-ti$ӘSڌ3ˠ,d}`9GNPD sDUy6h42 +ROe4j(4B}cW 5e qhgLZCDê$w1gsY;Ƭd ~FLE=!ԖY"֏A4R?DuOҾt_ȅ吉ʂijrIe6[>:Rv{l@UIY(]p)kYT JM2d@rRhۗZ%KetUɖQp&Yqqu8~ k`aBr Ej)KDVK f$r_ h X q.I)S>jђ(ZžSx34@LT"Yƍe q1ZAr,At*YJhfp! ޑ@]H %z-& ʈ"m` 0e`-#Ce":[V *㭌NqɄ%\wp§(AW*Q3X8D9 Y p ʄ[|X]! J XCS:jr<}x'YPʶVDj8nuVƘ'e7l("fL0R ])bTpٱ Ekt`3V#C J%IViBl!ev d>2">D{D (SyDȗ r,Yp&LZRn@mDϓvY.)l4 fjސw|d#MAB[4@$Kef"^HmTe207&d摷z4!h+X,xDuj`lB),\AL Q*#YDVR d,3qB@8 ! ]Z"*hѩd o -{@ʆ0?}!寘@PSᑐuv<\ĴRZqwu!( E>M!*Wvgay>oIU$BB 4p 4.:) (y&p2WY+cbII<Ƶ23nP0-5RzI"e, сIPB;S9,,ZX؟0tg;`Ka*nyٲ/2*\ks.0Aֆ"Ƃ`I|<9Iyh!Kyn˪3 b6B,YE8&"1 N'vR#E J )zP|'L(yY!}\DJPl;X1G@YIH^x4Cj 7?{ǑѿNy4 , 0x;QW0M "&)W2l2T1z"SY"D;Q۠pL1Xtu餕,ʩ U?Lχ즂\0m't0e0I Έ~Vv |pe\'tr?nL3(`F4ev?=UzCI]6õi@FnQR(4G{ 6pі fˆG?c4BvΌ%7ePG}7|ݧ㰸eN* 3c (P: `QqF&# ܈n,:OZtc`'y3\BЌۉ> j-8PZCzO5MgЙ v&6Pi0[GoཾH~> Ҳ\NN( G$@G|_[J` 8ė6@S c|w+@7X8܅ML:a0(h&ha33~J´kc+Q;3,FGM>npźyМwKiz3cN *з=iQΕf '%xN%(9<]G\ ;g4ƏsBIOJ\n"~ UZ`ۣ! 
*MOlvQz^&qVQ|E$0N+em@<9pS<ߙp?MޜjP Ρ s 1 }@裸)CSĉAǯz)u(UO [5~<Ώ\ژ c83eߣ@MOgnoJ@z^PɚqQٺLv,Ä1ULj }O6ziAy].o?xcтT}5ӆ 5((R9 6&vZ~DkAݾfbBfGM qrh.ؓz GGaPqG!\tM3œnUKsaD`y\{Yj_2Бp -?E8t8[]~?]!h˜cs4Cǥ^Zv)~mX7G~0al#`/5~;y 7(6 >ny.׋wyͿxm>~ےvj޼F G]\OOWW?on8]ޜ߼={7z?o~yu ҙ+(߼Kk7\\_H6)/n_±ݽt81f_@t3>Eu6?OHUdve$'g͸YLΚiKn> M1 MpMiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&>$:$g9{1n)R@M.bJqqI ^A`4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@Zo]%II "' \AI RI PI5&GyI MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$z@(IJ@N$xAR{`"M( m"I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$z@./O߼QO8auB ]-֦p\m1kpi 71$HW ]1n6RtZ2u]1sFWCL,9uUsa]UmZh dmlXz[~h 1b\*RtB`Oʥ+$FWH6uŔ5j=D_銁 ɈӺ+j Pptňp RtŴ)}T]PWdK_[\wUuURUekUUR]=vI9XApEIpfU` ֣+gIQlF1~|O(u 'wN+}pדtŴ9uŔ9V` I c 1bܥ2fj]WLUWxzXuw9W+~7%&r<un=Ǧ7W֗/޿7_ܿAp;r"xdS>ܣݏW7w8O#4L[m8xCyb8?<~݉޾ۄesW݋'ݫgzm7vqۧLOѾhD{7%..lvP!P8|<-<}<}¸x˵dü )۞x5>M)LL:ʧȻA.#$2=Fur͸-vu4)J./?N7'p$3cnF f﷡+q[q&<.iZASXmX#pf\)v{Ib8'1~Q@k S][ 1t2.E)bZSR]PWk*NrtŸ^_ǔQ3QWٛ[ꀣ]1nRtŴ?gfUW+U btŸA̽+MͯcUW>]\&wuaDm\Uejl\z`tqs+:[S:Z\ ]10ɩ7f)bZSuCIR3q)HӦҺ$ uȺLt`]1n 2mn^W<\zZ ڃeAaBٙ|, ·ڎ8n MX9f\' f[4S6kt$],ָ:ܥ_!^CMb֖ǪEW)\ ]P+ƍIRuŔVuF]_bԾ8z1b,h5)>bXpER3"I10eG d- V~Rup3XG:X3X*tUTWz|&/HW.6(EWLBbJʥs+'A.TRoƫV+M(N"VGcp!>Vo1vujׯzh|ѽ:ߏ=fLgθCD$~9;rii~S=DAݞeܥSuZL{J%xABƕi_ZȔTW+U.)LF)bZSRV]PWX$7ܻbB$]* }0gָxM,ܨ%JJZ+*)}[`03΄b ]1pbtŸJѕ3d +bwc&㱄AaG} , K3›T.J6zVK?-Hzn1;V~>]WFuF]egmrt`]1.y ʴɵ+QuB]㳞NOS]1QJZ)sP]}6z{ϒDҺ-.毣 mWIzEuY'HW N+^uJpUu]9ﲱtq ŤuŔ.V+8/HW ]1nS]1miesOUWϢS&I`- ]G;+P֨+Q[5p*U>v0_x0mbVt2Ʒ91uygۛԍlU15l6M& Nw6|82HWqFЭZnNIʭB}@\[ +&$ZE+0e7kf'4AurtŸ!JӦuŔ9V[&AsTb fӦк@YQ]PWb 0pqs+-/eҌg+wgr17ua~xmY,4 ]9cFx{Zp btŸKhͭ)jr}]8X#FWhCi]WLj1f G9bi +P:kUW+UH6[/HW ]1nS]~Kk]WLj"W'ysǝdIU%-1fܥWELl]Lyz4S 4 7)Q]PW)%=Mq-Izۺ2#5*g_HќT]1niK5YuB]LtܻnI@[G %ᡸJޟbJXV{ 5ei]UZp6.ެ4vWʫ; ] $EWLKԺ2% uJ, ҕ+)p7RtJ6[(*銁5JӺܺ uE TW] ƙuŔ/{R]HWÒR;lO`HT݌=~RUl·/Cw}eYb׺q O>njQZ K٬qIRq.xxC$]_ 2n>m7#nz䖤!EB-yiKZufZ2aI/M~z| iy;c-,`(1{㪴ɬqI&$3gi(IshMBk8JM$4ܴk.\p_ ]7q:`;=7Q"*4&YJ @4tЃM0`R[n]yFڇϻma E~vP 
[binary data: gzip-compressed log file `var/home/core/zuul-output/logs/kubelet.log.gz` inside a tar archive — compressed bytes are not recoverable as text]
`Ce+I2sFPKaVskD Jcĥi<F-%4;ܕz{n;HB)6j%u; I.^(#.hmp]1i"%DGPȞ 8-12:C\%(;n-m6c{AoU _KiV\+4hn.nB`* <r* v& Ldq w mU:[4`cHa{5W7Y%)=l0zMRuSH/voa~Rˮ\c{ti*>߲M͔Sz-MOm -8@s/T>KLB{T.u\Px4L̇r"׵j[2/`"r׫ٗ\Jۻy-Mp %=:d3w\?߮﮿z:pyw~q| ', cyy-Z[}VmZLS}Jӑ}`믱cle{kI+?_aR`3?jru@dBPq⧓: Y3J94,cپ;g[@od4h) *þԀngJ§]T JVъP&S|߉VrޕB@1ΨXS- WՊ1QOK=|2#?YDLb)yzޞnܢ/-:/iƠc\l{ m~,N4/n{ꃮXPzmDS%V(;i/N.h.U}UU[gaa=aC212 aSL,ΕWx^oZYMW @4t\֪F61[JMO3"H%6*o)x=XM"I ^'Lׯ:KijeL%°}wָ~(g8٣QRT$.hJaRH06"`D:Kaj< |hAO+-5粚=B(wfn"r<+ &Q(TI`fOHla[^ mKX/o\'4VJ@xl|ŷGWf2zl]rUOB뎯}-̆*ND&t𝣆E Y80<ҁBX,|%gӾ/PϦamvФ 0JlpEG]6˼0・lC{,..慲QK뵶/W_ lY0aja 2-s^N0ČA a CF{7vӴ/Tp'ƞ(tڐm`{~Mhe1d:h`l_X@!J{޶|0&8 |?b(T1-/$UV1G[s(xnԅPHGmM)M8!ቩ׹K`o\LPC7ƛAsVUK,rq5p["ݧpp0Q/ _btyUb~:n MW|7_|sQox_e$far}7=+FM?|oޜRGhe颢23r2w7oTu3WL nSMe|5kBf6QƇ{)^ *7)yIwrD͜u!HpW9>[HyΘzL'xe9̷XL2Ϭ*:&M`#S o*\Kx,R{4Ù5PƠu>jwZީ*ך |ephGeB`U?H_'eN֍K2Me8:z%yTzD1ܓ>l@:M,gg\]GT?C(B1!NAR;„#wqfJ&i7R@'RZ FYN% Ĩ#0l!BIR=@EQ#ݦWR{8C]e9)vՂ\BJwa|5tlXՅ"b譊e6ľoIƯqZL?2 *Q]% ,i>҅ j[-TTJ,pZعb߮KN+l΃D^ ! -:3ܗ=JwecM- 6v= #I&u_U5Vw:%,Խ z4"?~d0.Fa=LۣIKǵ@풵v,JW5Pǐk&c,^FYOT mFʕsq+`.d>\PD ˨B>y>h2q pW6w|)sP[(5)cP{:IjbG9-YFJ%}I&4{?+ 9b~VYa_BNJ5H) Uܰܬ\Gdi\E9ܬ{sש(ș񤬰|'2J1M˴d}rI hч/{}P).ߝLKf1yXE/3Ԓ5>k'x2!1kX2َr06,Я4n(@7ku 3.qƣv D48ߦg|er|V9|9S6_`خz bJبB[H !y )17j\&PϬ:yar]ϘAP頱 F&뜂.mc Mqυ⸾UВ%Fe.ZhK0<7Fd0^J,0F:Q23wa{IS|ό6hVBGwdǻDf}AxW/Ev6Z/Ҿ~K-)*)}! =w*k? 
g%~=}ޟ.VyWKE8-ndGb:Yw{wQ?E+iЕznZD~8 4?H/t/~zTZB"H>IdbP-iGwJ"(W=7?"[/'P:+>eT6d3%M#DWn,BUVRmdM4Ml#L ݣmviY4?bϸ @!8tSN=R3.IfZ̋Nh[ 5ET"Ywf# Å4ݜL֌񔢛lݼ[us f-;/>*|xm'\f%?i5E>ܦD@_|]Y|;QT*_a}v,֨;?a fJ]+]9 f++;WVK 3)]YVH|JWU>2+ojM(T|721O)siDiY(gm&*?ޚN83*yNxycSd5 p\;;l8E76)r4D5W<봓 Q lAΟq&0j nanpD5fu_`taf1b*-*kz; ow;L'6IZN/E&"6HJ6@F QKWcPrlVx1 e‹14lg\mM@Ph6jKs94۾A]FDe {F{He寧S?_n .=@_~% H]8 gdTjmFJ7+ѹBd4&XU5WWrx2iQZЍo8ߧ6 hȥK?n'2 컳|H:UQ5Dvu+% i+b]UIppڝHa_QO7g+,!@z+57]W dz'*#q.þ[kC/= 뢳/ЈݦCL1N+oQ8/[4FR^?O珯'uTJK!U/xOr!Mmb-Ț1PMCF+_^qp[UNNmg jnIg1(KV@$1\ǫw޺Od4rΑ\9S\;U'K)qs:Dm"#ƣJHaȚ1ȝq͙M-DXp5`kJ @ ݡifsjC30Bqck$Y ը@l &6ol` MMGa}  LTZY01[4)m{1'!%7rZ%tR *r5.@ģ!۝ ŋVV>6^&[ϱcim[X%7VyP 뻛r N,t@yyY^I-huVudG<_.߯3N(2;ɄֵM.DNߛST Z;@Jx;`ly+ pO_C{jx(i(h *zOn]NpB)3$C̓uA.ɚ1N3"{)3SӠoXF]b12)y+.hX9hŻ)6wBL9:8JG!-%9>%QFbCuz:ؐrk^iwvM n$QbQUQ%6۝ΝCC*;SmHoS5F-O'kuXk( gEg8Ƌ}1Q[6;`ɶ=><*C?ڷ Wk{+x}D c)3~~C;RekL/29Sx(0xP'n:[83ٔ1zj=j0vqô)B!pw Pe)vN`za;-Tt܋?iaB fwkuhx*`Zi.n]/n p:.sm zwcϺU&j b$(6cXg[g;aFHp)dX @#~-J#pu[@`|RJp*U6wU݀mi"#oe׊țۻ2F7`ȱ;͇ 4F9 $!GqSKae ʻX,

F)wZ=(1{v=(C.nhBS^8*nU`*83|Y]{WaEMD'ctXSVóHR5470T\<Ш0QU4GETh?Sa#J"a>p%hmZ$ kJM@i(yMr1EDMpyVB0 ]8TA"y8m8k>a *$>9*!@-DV&+bJ?,&4xBGIG 㖀rɅ H֌:"9DzWXszo./'+8xyNqb(XK۬1蝧 +dLb.DU[i`ψQnIUca(v.c`:g1#jvb9 18(2n@F};Oz {gy19!u2RVinbMG+N]V8U!z@\`Jy0oڙm>4)$ThN0rhl8,J )DGn)@q^wT_ 3TI=aEfu dL/t;gCgCՂ?y Vwn-L#w>j>V<MSئhaNoT36 i%!zK֌D7eǧ0Br̄D6&Po( cۿCW }Qz(:6u݊]TF,I`c 6Ts>K9ʆ1Ey$vlҠT$OǸ2K͙o﵃׶Ѻ:6)ZS䉖߆).N^F0 eQ\Tpxc !er( |Dax= ]z82DCE$9&=Z ;2qH5"B-d.-Q!:#ȉzR|MNQE!]x3Y1#v^DWNӅ?+vi Uz݉}UR[ ,YIT[q˂_E^Kn!H_3,WUa]û(Vd`kER)u("7ZZhd-(k!0CZMx-bA uwh2Oe2ԕyjh #J79sJl]}R Də2R0Kr.6Z o;? 3%wk×˙g˰qGW6@;VJk9Fjua9X ŏˣz \ď <?)D,BRz"^S7j⒊=W\xӮ(J+ʹ`(JŀzRKZuuzr'Wš zRrJDݩOO ]JL?_.tDꗃcHDkdkDӂAZp7Lsb3K NT3ͮ[ "w[ NҴo͝`'֜IT&M4e)זjjv,.RXrO>ԗә\.r Ư=g[:5nle}ϛd2)~< R_.g<H.YB+o,MHRa_9EQrAYMV#FXu' @1&LI-59-םli/ gN^ƫ!LiHR4F9δL߅{$z/>}qKčOWp:YeWO-L`Ded>ѝ 1`Ea*-c6_6G?L ݸeGGp d!Om;.Rs[.a` 8i;loc%҉)m?9^w;;mzfߦwB no33{7w;ׯtϐ9ng۝:o~9|׷o?}&(o0Ы/:{w?}s/L}k/uVuǫk;>iρ{Z:ox|˖v~6W7?σY~'J"I ]7ótg pG C+g\:̋~Nq1L~<]8n9<}_&0;k'a3}׌? Sv㢿;<E JOl c1wsMvJ{.G k8ه/Mb!sk;كD˟`evi QWr`4no#vz.˯'L^x>x?p`4˽ӯ~zl?Ehn_T88p4y'M'pun8H?!8|Bcwx[֊{Wܙx'XkYoq2%z5گ|$K` ]8q~=:0|iۿ޼;잂1,u^w'_aI{;HћAo0B҉_M30't|3r7(]/v`# x~n)ff}N{m]hLk|K& #8Պ;!i+89>Ox". 
.03Ϟ?'iȰσzm8G= KlIi-ӕ6*~x0f@ 6\ob.I2ܳ}*`oLǹ;kX֘_TaN#Oy=NRpW1Қ?qZ0ղ;LS-Zvb!IPǠ0%iyaAEd\*i gױXuJcDA9ubΒzWGGdMZ ;a ׈Fy$&(EN='.=6dL>'g2=83838s&L}+Y󃸒DTSq%~H F2H>}=0Ёog< EC1FF{ 0Az'sp19yƸnƸnƸ^eb-IJf4eW](Q!?$S}Kl_hy TdDf,(l44N ѱs[(hX@ん"H;$m4,f+rB"`$R ୈ2J-sPE*chF K!'N$BBh ga4r^/aE:pR{ Q9 `/ta@(A i rv"Y-{e7|u2~$q%,Bɘ2VLy-BLq 6 (tLEZIHqp0\ꣲ$h]f#Lb^2F*E)Pc ):A#㈒`I-V2*@eA:c*wG5et+.ЙpU:ELM tAcKL">A 0"k.Y)hsx#{n6d(gICuMoea~)/낷ZJ/3k^R$gW2~Ƹ.")",V ^[qI\`]8DJ nP[1HgE]$ީRe$P31zgL)͠%sSF`G̗ @R5xE> #<> \AT!-_MX-ܻTy4sdyp-֙-w!M/q5,ĭFR᱖0Z0p9;"A`T/y^%XDJ'"3Kufc32m,81fM bo]1ـ XZ:25&)5VBWBʘM ^ (*`$!pI< baNȰT5 `YRƃ59P13 Z ]Ր[z` AP+Z@ {axdi0#ZƕT Pp)*$M)L@Lqqu3Ԕ.d&] J~Ϸ6Ўa J #P*)XT^?)]JrTPMoOtEt)Y?ߡKI9v'@ڄQ5 m ۔ eD֒ZnB vQD=:J?C7I kez^L)GzON?_K,wmiĭ}]{_{x6>CcUʨ5*\AssGSLE(upNc^(VFhY]xjZe Gc-R B-PK-ԲB-spZKZU;Uw@J'7n= s Sَ :Wj&|{|6Z_c@J|HwÍ`Y&g|+!lN9.G[Th1rɊ"QŹ NJDQJr+V]Qk*vFƝ<3\EnS :pW@kO*(n s|vt䍄.z]P9+{i@C|LM[[\4J䚴SB4FbSٶ69|y*Vl[ٶvo;^#Q\oWz@w@7Gp>㊼hҢnUs: M9F.rTt (yT$W1hXPSBC+Lڊۄ_*VX[bv*ӡC-5B,q 3>=A ƖN[:vSR/G}|2BF`ɓ]9fwk⧿wLfݑ}?!xm͘kOs9 /ʨO`pz1D (a<E+1ۿ4rqޤ{{dp>-dwpy^[$67Oo[%H 3K)F ̡,#hK ;%ן_*$eO!Z8-] jFnո"M\zl7 mxKNʨ.V7X* %7ـ_aBsdA&W8yf!$X( \J~M瑩Ǔ{+9;9 ifƎ6&5M͑k,ls~U%n_ռq!,Ə|nb9DZjr?w~ը` t-Y$$<SvBYkg!P&h'e ޡyɿՄٿmsW_ٿ:aֹ ι1́upsMzm )8WXS'-c( P@*,&-WR? 8w8EILG5;\Y3v(p{, մz>ƶm `4 S㋄rJJwR/>HDqNsvr{Afje)sک;ce*. 
\q:z1{KK1 K.D,›3<BYc2i)򌙄̉(dH&$cAA+LEي[uwe+V([Qvy({ϝL5"Ftbxx9,Б, LYcJ+Y̑:bZ#t[3>v+@'pp;H/fWٍUazFq/Lx,K:d0B8.tph襮@!%\Jp_^(8vkZfW j jOAP[Bm K\0(S Y C#LZD *d@*rHBy@I5b,Q)!uQ{B$2qi_*+F!>:w)9 >$S&#-c0 wFm-:]ш$\(J )TlMl{d}]l[ٶmeۥVyOsҘlɔcfb uu[ΜMAv)LQm~8o Rh3[Up*V&}I~)n+Vp[v' T'l:h(}q*fD%炜1,A$LQ݂yc.rOj ;j wZ=+oK~jquGqB?gNĬ9N8)LI+H2 _.)uMdϥP2*p PGmƱQ+NE܊ۄaVĭ["nE!2  v 헫뛓m,'QD:4~Ya$vgy;wEI TG Ԏ;珸+2&+v&E6S / %՜dX"D!xPlS4,^$|FX|TlMlNmeʶm+.m﹓^qF1$Wv& ,Z/ԡ3`#GU=X1(Q$νtΫU?9iM0Q^D,ѬBzP(5+bfv@]?AZ ޫvEٌ2ÓxrK}۟w[Uo?_|ϯA1r\ K.)ӵm?Ꞛ F1})91}f`~ʟo5cO9@":9AtD'A"rz=?cO_$cG䟚WZM'bOΝ^G슄"9椏#4 Yr<7[+]R:=)tvZq,yiFUcGNm<{]a<"L& TAGrt$)Lc`- 8Z!Bܔ_WTWjW]ՕTWSQAZ9#,wr,LI* Lq Lxk4XJkYܣSܾŧy?Ϸ`rh2f,[Nɜ,.1 h5I0u-K)I"QY6/ǧ+ V,XYp'=t&X /tH,K'H+Njc / xLf6:ch~R#xM ؔBv%?}|}]^3JqdAdd&n2FtUINz3i} yWoh*BqΑ?Yl/Q4 ٠U« kPfW{%{FQlW3si2&lbth!KMBbd1uR4xCxKKg k긓-ڋreVarjHePDxk53P0dfq8lU hV Rp ,ųHh]iP<Z! '* "W3Os v%cCbɤI: i+)i3aq F"]Ƃ3gKC02yU !=Bn. >QtbP 1fh*% .1k>Oh>]OITL #K9n=@2ɶT>.ISDDASdNM)\K/f$ܩ'<^QԁU QYv;J`c(jxY&K@gd O9tXu KD|Iuqf2*7RkAn4F:2#y gitY,k / 4 P7^>"s ?ML6MYK]-N]R&ˆ-wmuH~A+d, $ɼlX%ْl'[ut|GꋺuD9UՅO Qk HƧ]ALTsCN #sCfH䝁U" =t4ki֮Ӭ]w?k7/K\KB6pjIW"cwĿS{/osv?ŋjWlΉ3E+YȚV1J*_9(!G[yc.kCl4Z]ʐEJL]~T%?Yh{un0Rnb%Vkc b{X+8ʗ\ESE50 gRo[K([bfeɓ'f~v*$Y1^tO訄Eʆ=4~qlfdKJ/dq7߳W2ś#"ŤZ~VˮGŮ乏d?~ŗ몳sܾ% /dYpkY}Mdbo||glw~ / t}7x3,;ާN]tCGo\zl87\o2B "ٙ b +l M㎭&MwGeGIqm 930JL &E4XdP1)81Lz0IaL 0 o0w$y`cy$,X$m`9/~* #qD-2x)?I5,@7<)OC%k:Xb\6uX8J>V탘G0DP2&EH,P I\Ѡc+P-PFqI`.ebgtLv]t`arN&d^>ىf4*ELŖ ^你,A j a`8oeA/Ð ,[ Kic6#rfez *0ch WgLdXf`$\ze )dDuv%=n>.eo rQ7NH(x*~uy- %hPDA=J0JKS~ˎ n*(;4(6'Fd9v')ƒo{'C0'Ф[BҚ$⸰' R5ud}ݡAIwA$TBF=bnHJw*d#zc t.4h;.1'BYn{Rd{MPK* j,#x޸"Vt@}e,VBNQJ{  ]MzB%ku{d:2mg]t1TԻ.}u)gq:;JDPF9A{ڊAsV3؄-jڦ+{PJXGl}$᜸~u Z-5 _`{+gOF,F1 ,(&Ѐ8M6T IuNz^I+߹齮Nuuzk - HȔVd=q2Xt~! ;2u,~Bkv]}U*EXj$k`/ΟKp l+#Đ(yNHخo5zHXfVL,s!"حX". 
Ԥ 12cƁ 7()1ѳNC:zoϔ6W=rKM͜fe[ '"}*GXT*ӎ-n|5[$gDR|&n4qn(Wb8OL3(y@&Й h=P#pSBh%P߀\k3ژρd+TcId ҁh•YD5\$4氋IEz3b~N\, lN j{/aM?^`JdNC:]>*^O1INb'~ALXDvB}炷gc'w&JISRl 3Y!)CF~l9-;$d.SϋQi$.%w7yp7yㇲLB)鹊K$ V>34.I…cmRM$Z"ƍZq^~.E4?o6=f(Fp)Kr0Ub\#,Y?KWO:ZGoݜ5Yyb$V4lѦ\ b/$MOnܰ21hK~Hh)COřp 9S2(X@azoLJ68K#['S|eU._?:}7/s.‡3/e#z)wACKc1Z)U5A>q׀а}B\Z.ggBdKkhH}qb::=S -:Sl .1XrWx$^,, cޠAb2 v I6|5j",/XڠvKKDz9gu^WL>$e ?E = TF#;% N83faILPࠢ`!,@)\l^;d}q徢Gḅ6u~ ,dP *3)d7 Fd}DnXt&K-dp!.!!΍pKH&KѰ%v݃^Sىdɟ;8W `)r [ό1g d>Q2ՙ i"N<2-D0DIOni DEQQ<8F q2Do;0(cco-cOV g~;I8]Kĭ{ݎ<ϲI !q96Jk筧ƜyʙP+'B1fD). PRb'=ߡeL}/W5bsЌ)[oᒚo894_X sHQjJX_hTUpm1"m{ bOi'$wg\Hb)Df~f !HC5 Z&y:L/-gi{T\ܨ"A&gD >M>0;p/@<4-rw;:5q>Qո&4O8lq9±bE  0/WXJ=O*ro!_d+=ىp~rvy.ujNrW>O/>BzuF'Fe:+/(UY9>^WV(ՏGgu<.j{S,n~užɼ}vS!ӄ_|Bhfx;3@wq~~t/MZhw5'>t݄zb᳜OO.$L=雯Dնe$-pݫ>[2ELƆ?3GEuّsa%>;mObXM*ӽ⮿ՋwF+Ux7Jp^xqߟ{_MU\/ΪFZ (ZmH.]k ,VAs^_:1#[̳lj'ߔAs=BSve}֩#'HZ}>vb寏mܛ$c㛁 }ҽWXVz UZZ܉Kᦆw'HB^=,56#HηaVe,~to`MލmӸCChuCE{4p EZFq[њƞޣq  g-UAj1uhUcƪ.6by:cz#|1J" OMǨqm+M 2ULXsz 4 DЦ5=F D5mG_zϓP+GUDzLvixv%%–YgsoƇg֚'~ϿĤi6_WLW]M'|y|鯧i 5U$(VWf1n6)웍}Sn%rk2asIN9XYQTg:#00\vq|qCϘ'1J=.S*8X&nEBD/DT㖨oLNYp_[q(kAa:y $Ĩ+0rw ia54vj-["6yoVۅ5u6C%xbjƐs7r M3&GN ^cҨàT_!Z^pFD=69rc$wI1# i-8(T @M,e0By- J.qi)[{ \JQ"D۝ĒH /)ZEfQs0ݡ=˞2$m,.i#\SQj {$(B"d!5#,W4W@R iE*~ ֪ P3Zhf(۽J&Ol\`ݧG'}G5/әg:ǕNzPߺ >DGhyi{W<xq;8uNFx:yJC=KѸ]ϰ']K J/!Y<{c(jMy[&4M6iN@Ȕ Ɲx:y[rY Ҧyec(EeN sF(Lϩ4Xc* *kDY6Xm6#PXHιtgvmlnq42a*ʜsg*7iL;P-:_/TI竧  i.ܴrg͜.RE*W}2™BSĭFf֝t"$r؂NOy!B4HFվdp*@KO۬,2fF滩tJtjs0!BuiZ{di36L\GG%UmJ}m˨TH/JF-sW%s\mǫ*JFlS:XL)!hBL2UDjFYQ2uW9V#8GYZ1) T+MhK TrA؟(Y󼤉g>M-ϗTQS_.D)1R_?>-lᎨ˚k1 />VΆ*T};\ %;JrЉ2J38[h=K٪k Zz-Vz˅G /쟨$zKIjaѨL%LsL8-T(S/ EȐQYavI'C!Iʾ:* 1HS2+YXB6֬=lC-2|fBA-p̾([0+Hza< 4m{=I P"Hm !;L BPh(º)+x 0rU.QX)}EFdu6MC0絯ѱW9]aS ʋ[2WL)1QCAXVlL5Phx< xȿ+Fv2fpX{+d/JH.;@&IyB9\o=g3k$~rO_?~_qb҇4/?&wt_QtZ%ܿ) l̟Uիes{=}Y̳Mav~~7޵\"{A=/6M XEXQPlf ř!)M؊H9̹HDAFUN3TJ)(a,LpeyfGQNbE.F8+?cTU'(f}؅Qjɉp\h|fX(,Y/m¬Rz4\+J'E!q~]V3ptZJ'z˞]+߭iJt]:WN .^ΫiB@+alCLw& $qz7! 
TRI-eH:)J,R'DJq40yDKPbS9M'y발 %kfng"3W2c3 ~7hrh<ݹ Hu#|Jr/L+Ji7˴M}423Q(1_cJxE4'ɍM`kn_s\xGF+}uA.<08ǮV;spK܎k״c4@Q*Ke&+ 5[u oFM2sO{.K>C137R!Y[^e'9e]Ҷ SAVMh"}t^oJ̥ؑnVYؚ C`b;7H[μ *1coa P h0Je[#6fS S$T6h'ƠyLI6X Py܂4!%5͜qD7 3(JI✨Y ֎o@aY6NKRJyI>қi Kk:|IC.:I!M-ڛ #Dj #7;A@-pˑ`hx`5J!! E4ӹd*ba/ꂻ΀&hB&h_wBKG7j;qP"`e%MqƬ5-t~@@ؿOFƑyGԞj`ye\[<HIpc@ly_0%*Rc'o 5-НѝkgJI K0xjP ^&L a`o01 L}5u6C)V$E %gܞew *5 5Þt'VV5 ~ҔA`: }<7 2n yMD0BG=BA$^I [6T>/⣐HdmvƐ;$' {ϑefG c$pVxz(b9ꯈ=R"b6]Aۧc`COl9\m:3V`ߝlA7>-f\WHHvt_&6gȣ:`Zs*w=')j&}J^8$'|ˮt4 {5}C .$5^bZVq^;)?JpmVFYYqM5fIP@Nw?D $'{4j"a';z5<!Pdu)W YV V!gBDžcΚx8Bէ!w %+p r\+a0q 7^V=hӛ ]ɲ ~{6f60$~\.1r6Of-֢XSWO/`1B-?d~,^d.:t_ H7e皉>~fa&j p4s._KȔ?|= vNoIc}6`{>x3N +&xxr:8&+0lﵚf^3^R56t[{&"k&6Um1$um1wI}sf懘"kXxӐЧ7V5*kKAxǙ9g K{RMnpƩ n1{VAxWS*pW7E{҇MMK2! AmUᢃѐwuJÚ!H ku?lg4(hP! RDIov{zz2ldcxlQB-E1^ÚcAt&p|U64*7^`ƽ )z"ڈjBT#.+#[ &N't]TB֐NQ;ܷڜV?Iq$T($e *&R&/gX+܆8р0]oWC ) 2-h }ytA4`[\F@2f6юg \HR'O;I0i(Gr Z^$@8J1U#Ls3ggy"sÕ8eP4lFD\]K`P@_D*g]z  H\tFTr<fDD@Dq8K D`3^rPW$a@BF^ABdXz|Yp cYAKФB‡ͫ\'5mhR 7 ~TaUQcd&7`;^dgv\B8 ; 1WH ҿ8N}A )Ɛ8 /D+}(d8]Teh%-,gG3ָתUG^IL5D?){&ޔrԥܖ Ek}xv~Sqĭ71^K?B<֗Ϗ~ʯv %JZOQ{*Hkc_\8P`!5\9a͒PVJN|@aP\f(Ƭ I2_e}J(m%C*^ SgC-Lww)Y$XA-̱9)(JV r0$@FE`^`ߪl QI׼zG! BĪ#1'Tv [0"=߆@xh$F0J!R"`P2V~bCB[Cp'mGH`?؇LKj9\]HNZSe>#@JUvÞNJ% ʐcpGv.Me #ee&W`_o5xE6y4eA6Js hd .{26_ {,:+Y2ޫz>6tD# d٤P6:(/T#X^ӄR ґk6t5󍄰?_G%ޔkp<͗ ?{[edV4)gs_Xɭ&2W׃hz1 ieLwbN§'΋<*.N7? g'Y ~~f?G4|m{ϟ_>~?}vqz-~~.k~_>W~|_/~~~ϟe} ъO/ivSx;Ξ;?ʋ*6?=z?Q>g!.璳B՟+~N]Ѩ_yͮ/xצj.k}ʛwF>dOXc^i4@hLԬDe M.-)tE)_<| ~m\Ⴌh=@4#>Zd;"jMor*.-oU^aǚ2}y?-2\ٻ޶eW"mý)SqhS\>Mؒ"q;K%[6eJ@c\g3rcpg:z^9/ tE;5pʇOug>F!=@;.}2wG30{w=>=x_ Os0잂,xuw7DϨzދѯo`/>')ih*r#qSnO!ΕR)P4O:KW?s0&o} w7^Iޖl5ª$n<ԀWip@`'n]Ps')ۤ] 4W+qw#09\VX>15YHk&qd*-SSc';MLaLqY6 h&`~%!HHbEWJ*˲ )Ϸ%\/ Bu5> `w {.slg|< #;1! :lWgS`|'O2W/Z䃣noz:\}\Ͼ2MuC_2vRD䒇윀z ohjI}֪ F]s(Ac;0npsvv~tώxسNH,C?UREЪGYQp5ѷ,\k̼ KZ Yّ[uY/& :z!h5&)EbW,%IW1ֳfXoűދׯb2KRMweNJԢ>ƃBe ڙx/\+">"(mƨokwEӠ$`,{B5op؅Gr.WBP MܿZ\? 
ĘT#1f8߯&M=\32k¢Y2XuF5Fڦ_k8von,4jav̫.|ifd4ܱގ?9y?<[;9NI=8EfbL0Nlww?mvS}Ci琛%u xzo:6OVS7B}ظ<~_+wbOZ A,6v`TB-cb g窝k'>ϑ=y_yBzB˂K[XXY nyau!3ѧWஐpY"­J\;6uõ(L> ~E "esg4tՙְLt}n)o۹j 6\/G8ԨfN+-I8 D_XLt4Kڡ)+Dx|^L+kLͰf.p~pPp (9/]9J}.]VrX~;qB>9 GmZG; "6x{Rr]1 w@;)ZOGmT m<+WYhA%Wytɼ5ʼMO=fN()rF9-2 uD̹-B^ Fk&iDz-X8 UA/ )ޔAoʠ7eЛ2_`Qn7eT>S X_W RHcMΌ pZS͝| Qz" &jͭ1"j1ŇL-=Ec {k&Qp~Iޝ a10;D\:DE:*U[n@ K<@$ПJcjjScoS:7H'AA+GRy/w./>~Xp9wŠL) @1ҜYm4kUX|@NSDwF+L opߊSzL^Օ2-)*,y|B2k/i*5OiPjFJoJoGf.Qc3gqny }XZ#Oω]l^IE0=ڀ 8낕q[".:@N)S<d^~$ɹe-p"} EVXxUtH5l7G/8k7 WRh8f*cTX0(7Zs.I /@󢜂&c/Fj(FXڞWܴgv 9eU-ʘ\ !&UaQ הU!bU(12D@i}Qίy/G@> Vq<_x#u-aR9-'GGaz>̯#\< _yixߺ&:Sq׸j8mTB:Z3;9auwtjAf+G9q y午 " Q!IkcCnt5mjfwbb/ɗGv9ŝ# NJ{;ړ!|?Çq 5Ly}[yRqCR6Bsr_?0Axj}ڐO8!ʖLU[B!=[7_-kv@qߚu {N}ku }enHWNh*WvO7uZ`rl(ׄFJZnb meR)- 3 b5@ٕ p IM!Qhts"Wa<˜8;BYM;&Fa:b D2&tDic"<:j2b"I2!%%6\DJ%-ZAO al;x <W3h7F5W^n('#v2. "6BԎ\10v!#jcWs7Gȳ3f)S&m`ֆ}!BPΨ0aިRRPȱg[@rO<9Ϊ2D" 9 +}#`[)F6QE)bA;)H80*%)!ck Q6^-K^ڤیJʭ1M>%D g!09XfSD۱Nbz,(YoMf̽=pN\SrH@Onh=Kro(Fi !5j$D*B]Ex=7I++KDn9'$J4]:[A htnb0Y|oT vcph\}1TTc qUiDp.9B:D-`pn;U5`}m,Z< Y|;IsziI Ĝ4L)-P6 û2`>VAs/dN3fKfIWXszQ%!u߿}M7w5` <Ƃ ^T7z *9\JM"ُG$z2AQNs0 Ϸ{VCKFVig2V׋**ҕy62"hҭdd4WY\\q3OIkhJ=zgߖőWaT?;tU *D]l4ʒă0p;E>p`[S0݀X'`'ArM(F(2\#޵Fr#_)iU&Ai`w[ݫ~,]J,)SYUm 7lXb2#'Rjj p\tE'2Zx[VmYq# AXc,ܛ>,xK1/Ӵ4ؙlh\B -˵Hľ mK =pѵtuAJum}.©]Ze%KK,MŪ`$ 9ĵjDG=l((zBB0|])sSζS8CK/ /TtLra$$2%W2E'l0Ve {LTc 1-[iBLbZn 77(YY #nόx\1FVv?yp%FcA.A^=} Q|şZ3w߾>Yzݸbw#4~{՟ ւxqg??:^XH;s%y0Vn-Oa3>l`NQ-oxEϬ?Y= CB)o+wp-p,D ~LI[ &Dc K,E/.EՐl;2" Q,<“Ѡ,FێA+ρg^d&CZ|~cS`K8Vdyx۷<ጹzX_ e(fS k`e+EpȌyqggYUʵ^d: K k6ctfAWpOgE'b=jⶫ9KWji}eVoRYu1w;"%Rj0%Jy.@UL6hu tRǵ8 Xh]dK W21W&v ry:s\fs>sy5#\`>1ޓ>i[ rC'yjՎaqeڊ|Jv6s@(Lqqkb(W;scTwР33e42ʐս  lnd ݓdHH)QpQ?D"OP"[CqgOVx L߳E*DAۯ dMy ;xf/,?K4$1Zq XqcCHԆcYq$xޡAD@gG!!AgRfk_b8sXG-FRz W&`bsfY ZKRhirLCC@jހ;0h{\̒BѶлh<%Ih0KpaأVԺh@}{l2Q^LoE.~5hNS| Ib{-Bz䵹 A`AɃ:$Ƞi'=xLY7ܘzd驂wРε:+?Kܢw {,ZC(vU(Gpݩhdw)1VKMDPD$ё:1,7+PGYfqw9MI_?u>گI -#DPrWk cN]&դ Q;*iOj,v\^9u ߺ~Ȟ>̀΃EzFeޗ RvK3vy$lvT^lslReSqo-WcP.]s5)f]mtuaܨi\'%'=74@f`5+o Mvl@SL 
ӄʊ#ή0_ԋeF1XmoÆqVwt+kNrTj)y r/lg@ӆfao9@Vk6u0R}vn ̍Dϭ8(m˾)vu*XpjKL:D (%&:Ԅ959s;@_N|gxw- cm5D 5 Ww$eYOكluk5#}Cw˪k@X-8ZD^džuA-ZɌ9]5 d bOVvwרXjvR&^Y4RN5qȻ4v7y{wp@Y?_Լ,{-cw^ӆ Ժ-Dv̔[^pPLEydjq:Łx4 YuKҏNۍuڛW$O0eלNNIfbZ>;9}m_Jg#b ðpN[طBIOѨB3;?CbMܗ}MLzo %{,#RY9 MvdՀa={wd1L nѦ slݚmoxޘEk:˫ \ks<΢}={Zw`y[ p<}^/[kw/&|} 1Z)}iyb@-IjF\A3e¢ =/)=wޟ]輦9fPגZ ׷_^^!%׆%NcI1%|/'J#_\?v~V.}KAmU& hYk\Zyfe׼N[~&}Ӧ0Um#i)J6|/~IO-MMa܎#q1(y-n}K mKp4a[S?̷'dK0([ߝ'\SDŽ,z/mи!B3@m!K (w 9mнo'SQ oQ#vDqx~ֿMODy8ZOqekO|w:x7n?!5n\~aqq#\'3 q 'K^mOfE[ϴ=k'ٻFn$ ̧$; LJ`el'ɓL߯(veu 2~_UdWjH Si"d qcxB{ao8 l†o灏#&j)-"2 !ܝ~9:nMFU:7 ٦)$wݦMɊ'Kx JvH RkQI^u[>bܗ6^Ս$}U)$.j3F:O'-xM:27Ymu'??yŤvSh\rʐTvIkhŨzEtMwuRy'cg_k0˽; [bn|bxp\oݶNĔǮ&|M뱔j:wm Sֶ.{ò2#1Q`ϽLjW: L-k Jʄ\eEN_:|'Py@]&R -s9٠FW_t*j -&`p6혷 5\wGձM3tZVsnZ+ H(cC^i%PvTtEΝyP"@jnd=2e\X*5Q3@(YZ֚y.IDH l 2 3jS T& "royfKx2S*Zw,mϰ2,nՇj8Sǻr;Qs=J2+JӽRI3/*gݬΆțyzb*mt@{sdNH\_a<`ߥF5͌`ƋOn>>}:łi7}HEq(nM)SQ,ǃQfc(bN\HyE੽72u6֩˕}ϰf^GQ@׽df&Jht6fo1ƃmH.I"NKNLBcR5B^){K_C+V/]zPF)/-~"WQ{*1B#=6kLw\Xv{*j0 02-.Gǐb&DZ=ɺhޚiAa+Ѿo?MBHֱ&坌]qU_|G~k#n۫%!p!ď&V`\tu_h7Ad1:,wJf:"b9^^*S-vC޶viҥJ*.M٬Q@8bILuQ("bjШ֮'dvM'6t7} ^|G!f>훫n|u;ȯC_7|M7Գ5ĴdLS ;zSJ.nGsʘ@y3E<~OCn6(T)V],ZJuxi-h:TE>&HP'tYH&ޤ"՛TzS.R-_VH 'NIiX/ty= !P^ #*U"?a 7 DJ7/%|0B4JȯzF+.zO2ZU2*h)M4>7 44 q3ϙgBBni&hY ,c-YA*5uMУY"s1彃xR_ܪwc 3=ҕRՆ`J;d+ \@@ǂ̝s!'[3(h*Hhj z_'76"-&K: n!l9[PD]w^1 ]zo^,d)/"gȑ9- S92"e,2=i 7İ1eᣇ?#Wϧ G.{u-JݽK"VREJzٍE̋a En6YBxQ3ɑ+Z/pwׁ@2.im[&n Sb=5W'VpsN ϲm!Q L`Yn)[rR;}4e4/)Q:@45)i E<7=˙T 5KJ<&y-sTΤQ^c]Vu=*g,ȭM1c0߳d/+qH s}ZWm):T 5 h \츹 7V-^obPJg>^#XZUFO1V?ۯ1NrTz LJ(]´CtoG 7Wt`?q<}sMm; +Ts}/?U9サ}0>f*oڛ?}O6Шʚ5ߪU?~IOOڮ OLO~kV>vpxyns j>AIV78[v ±a75@sō9W`,Z'E,lȵu*k$Z)ܘ\+`2]-{"vK.|J+bSR{pJȣ>eUQskrq%+#-"EBj̙So)z*,yBD!ՅȘ&`z -6V=J%vsԴa(4I*(Q Ōk 驪U#CM钵+GM8dXy$_o%oܦiͻiM&4_-.M_ %7 M gVd?.|LF䩦a. 
tE|n{Rd03 O4]xM{Q\~"]nRu⨛GݔU~n4̚FNٞ/o{GOTJ0Lr3=@Go0\Dova=[uL,15o -(|V֩0)4e\d202KaK'/ֳ-c!=p?2׿&I1sv+4/uz& ^D#c][9bZ-VЇ`"U$ _9"(ocWpaIP5Tdx L:23o J1Nzeu :0ՍCj)QԪ诊jTgdwo <:PLyq-XZ/΋GXѓ`ēuHñZДB!?J"jIC赋hV;Y.m\Z tyi WN@oɚ{;+- .$B%LX<ݞiDa pPH&Z0462UW~̽ ,!y ÜT(eU[[.K!Bl#Ń;3UkZں5\p@ia^b1%Vů9"PX#\EF>XLZ]K>ᡜ33h`qƔort`q;\F#p6Ù\Jf]>5;/"X[gbWdKlp*>h؝ƚyB]+VPd0c]2ZAvWniWˮTwN 6:`[ˢȘpJ_ dGա#D>37{`8c~s#Ƃa% .zOpzC&㠘f+|L{= S9b,)|^{R.q:Jpm ګTRY?12< K,ag.3Y> _6Y.Vl/AjG!hZ]EJfUALm܁!_+]fo]AÈ(}FknnG󭮘F 0ң"iB+zH+f-t8!.xdTeV1E*?b+D_r26ixZϚ! J v)PC,W&'Jt-sMJ / Wޛ0ƽٮYM\QBV)UH3d ~3䡰8s-cdSS}*Yk1pJ?{WѤ0_X/k8$35XҌjQ: Ů~J RO)0bu%^t/D}\茍=,\w~.DJovf؃W^¾+Qw~ܪA}}GA;͞B=ǚ$&VRTL fZqА!Ì+U"yh]#iF35 .[mO]yg”:䍕葍U6y8%`^Ih0J  QVD推\ 1 'Uk-&F!AʑNd>>^ooA pڑJI*櫊GvT#WJ/@#xӻ~OjzLWB}](EZ(cH05EGK0+dh՝Ib,>ΫڙS9=$e$8rr@ $q:_nuR\*EuU$WeH#ufk0qʍ"e}|OF" bN҄1(7TJ&W39wS1I2D*TrL23Wc`\vu ^VKn473Wq9m&]IW2,j@iXRX2 90b#wOψ׀>e"*ysT="32SαV7YJ#v3 3E6%W[-3ʹ`4se/]% *Ubw@6Yg=r&^)%'T^nPV!Q;j*ky3=?Pu-#Bkhƛmk\.B%! q$ۻٷ L.I2IDT!g69Ls- +ragY'O9F/H=DR+AЅ A4P,4U #ypiNidߌ34H[LL;?΄U 5x$mSX\t?[R ?g3a h~Q1 2=r PfBCo꿷fze;;cP{s?hί|C9?~:mґGFvx==>~=i*_ui{[E\ܹ\W^a\[%˃\u7I2֥2|aDB4R[pi8jOE*FjCpԜkPUzjnݼ ;/nYڦE7^6Iy8ǜR$gm$c>&MoҸ7ˏR/5Q*ո%u>_KW|/W94m{6Hvr;9]ao;Vm=1xzBy&nz/zۋNZpi5э_W+mUmnKjA Jƕ<]P+ch'ӖSz&nx_\}?E΄`-]jS;4sJ~$wݭdϑO\gb^>p^.I/0.0 m "4Cʄ\?BH1!KX Ӧ!qH/ Sq!p^Ju^\qP ]hʓ/X՘Qd37Vv+\y:2N{5G;{;~6N'kq*,oRi2uocMzWoqtp=rWOB6+0>=+0AAix\@x-?l3m*L]<&GM4ELnܔד2DyN8bs]Z}4eٛx iZQSQ>h B*:eJOF7^G*fG4XP@ĉR Ԇ1(ciV3gfbՊjkGC8 ֢i0&K_b)-qb$g,vXOG "k>֚-y܃13G>sbg mdYBes$DPP`u%lq>/1)P/އs9ccHӷ:]wwɤ5 v JYX0,k(T;UWWTJښͯ[EhiQbOڥkAW/|+lav 9v.%K=6I oČ$eq`!26$"q c%)]denDTچ`&`BPF%茐. B8Š#c܀Y$4 VHF!J0IH $‚Έ+e* O`C9 .2}}U[jC 2ITk@~WdjO2u9LU.O[0ѵ; SecG-b -p˘7uY}ni‹}q4x`*$I9Ř ՂB$gvxO㔯R Ʒ}\#ajqR*kX8Кc,Mw7ŃxP8]7y6l^%.c !UK<g9s"&%lMҬf Ǔϐt\=Eo0HgM\f0/quEd/R1c 琫H'A߅ ׁ`"Vb+'|ÌY3N*(&ءGuf'|7ū1~ jV;|ҋV:VLqH&ZsW ɢn.8o[ +Oo;o{ctH1;]$yT2bjZqͶ"cgD$Ï6J Z ;% ?]Ht Z5Ozk[3y=2m%W ~f'ѢL^`e~:mҰ~z0<~=ŪZTrٴzyFǪEIJDSӶ˰6ι44BQX pPry}\R9GlӢ\q1~ QEړ3<ےKbi~L]q]?"V'>p3&(9K DsmJehhmH\vOQQw:oޚ |E9. 
IωGDVLJ -cO d$  tU%ēHr`3>X 'o[0~dvoT2 UdYג{щ0LEnKd+0} ¾@59]VDn#HF$fq(c(NX䌱4P؎>!`zvzM~3c""wi1c9]\>%2 ]1MUS*DFB:zfdkPJMzN(!RF?mi+IdCd[vz&V!"I,9`(!q+0t;7= kyY<`۪jtvAnp"֩0?D֜ѭ\P`ҦtWCRpuWor [k;QJT|ZJՒ<$´sσVr3q.96.К;'w7]_HKK?wfvL.r1Xye\.ޜ<ReA緿],=睙|Ktnʌ[ (0ı,0% `/p_4[U 4%:.4,,fH02a,NH,)!cԢgfƱ}J@@Ë,.@vى s0;q^/pz~۝_-z=p.^K# '^/uNųgX [Tdlb8"O,ۉHNv@gvR]qlPƒ9[E,^^{Ltّ"t4XHDCCH!cJD^~897^?_W޽y z8*F{<|̂;h?^|)y< ԞaDçR(XuDxJZzY3h0lx4,Id6f$3"Lc, Cp.` ,h(('$Kq7IW+c0\Yp_i`Q1L7ҳ;~rQ eiㇲP#6DrB$&Am0V RѰHRǠXX,mT31eIL !ƖDcRRzvGONcpѮ-6 blg!TLzB-'p<ęZ6F@wlu61d%(ȫ9U:KpEjF\oSK0j>L"T[G|v=#>_τP.?ZhdceiR^hjkl{dY/4Ӥ1~_l3F! 5i`&p| 7,pGEq" c #Ǎ[Ya^rz}  Ĉ`b+M$?2-5"Ջ4AqU$ʗJxAf#)lXDjX<5˙_\ 2ΫzUY79fCڽ l_C˾ zdƸ+'5-z5`@(GNNv4 (3)FƅH>tyLݥ"|c|g72DI9غm?ԗOW"_(+ |gb}?KMAN8Om%l]QU厸+p,ʘ.@?#^:%-T%'4mQ Ku$HH 1 r AJFJ׸ 蟋yr~Wy*m?t1z?eŴI,⫯}39~~Cx4v?DK- .5WENQjhZDcM"0خ՜˅2QLETL 9ggimo#2oTJɎu͒2̈p0[)X#{\IQ)3A0*SYC3O,c? ׻Z(>J)Z䐨()6 (U(-gw8u"m. ӞiNQgQJZ b,i1L8^ 5B )ڨ\)VMfݾoᾸE|l ZxfU ,X˂J¤Jbp`Ô9_"K!a`]Hހ6d=`-_sghgfgBII. &Iᴤ ö,E!87`cT^"P=.OJ6r!!e"aLjI˸u?/S,cY᳂ѣEɭ FT׏ЄP*f2Ao7AgH3ȞzXs}Z0 h< ;(or?л+3~Șrcs[&bѕ~?@oE_9z vG1hHρ] "lv&cڤMJs`B-_\؀Lno bDtUt5[AL6tR1h}ғkR=KEuk#R^ Sqw(ZY T-zL3ܜT#DJ/Q9d5tZDXw豦=))S)IX g BsjW4{XB N# UB$if,VmGtϑh NP01$s 8RFca XA!8,Z* QT &F 7RPVӂpXds)_R8f JQo,ϵd+j )]R Ih9-@,K/D&ipE z]i-`qITm8q-iSW>8 HX #̜H3l(-R ayI48%N \*^'籀qZo5 WK]!* =*RFRpZGVI<Qjd8x8pƑ)V:> 3QEIհI'Miגp!\^%/A 30oSH)a߼|ST(V/圎xX@ E4*T]W:<*)7F!L% VdVD9#n EE"Xi09tm@zQZ Mz%W`gUccäB0d$h(*Y[Qs4  $DͳaPA*pHxe,accDϟPX +{1)QFy"K el|XONBp U׃{meBEV6 P0h,,N72ؒ jeb3F85OJv5QE!XkF(FdH:jI0QtMk+IlTprBWA{ H-![$j2<*~D_Q~ Ѫdϒ 'z1e!{u>?Oم!,1>s^@W͆B< ;zAi.x͆v:~71U=(=_Ӊ͔̃ಓճǡ϶X<`tttL:gYևRHOJP{ՊⴑU j}ԉ3=[2A1EI)j{҉ qzvG"k# Jy*But"BXfU~)CP :g-]u\> oˉɠY@2gm[ƉszOPo&CJMzN$RhkY(/G&E$U-gq㋿WYfab;}6H_;,G h ιr+kSYZ;^&Q68B4l/yJ1qn{.r@UK9h<G{e=8dtT= QQɕzFixKFKLzʢkɾ?_!}oK'Qgy,՘$(%Z#I¼"+fI|7A%cVt82]܅81ms'Řqb rSl@/eQGG=/-geqCtkdDg7dQр.^Uf6ǤlfSZ+ǻJ9=8dB)fb8D9lAb3@Tj>+y2уd (.3/q,QJueB\rIv:{n>1w=Ȇnh(Q>ny\Ԓ*WbqYQFӳzO - 49{~'2*ޏ{d 
R>]w֫k8ɞ{*RTaqHzu-G11qPD̢ylH*WjqR?ӲyTR0ϱp).1#"!-b^YuP!(3{]Ir'8Ӹ7i6S*&h73bWy@& m~V'lVeZ-Tu4}lp=FǮ WTWؤwkKs%ע[zX s`4rh$X^\w׃jx(2ѽ̭YXMոVC5xލb[,9lQrZ0]&ć{ᵳDFsB)Ѽ6'~۟V:?VR)WBwo޴!_vPYsc. h}N '}?5WS077+du$3LGKṤkl/]xA/`aG~n,{ P 8A]cpfSyOGO^NkL:{mUjDN^kk\m3O\aLDdF,b+onbvvr}\|ȍ3c}WWӯupRWoXFJ8`lpσc6nW-r@H#ilEiôpJ=1z߲3g1@{\{]>KdHdŶg U#Y/=!YS&{2`_IN&~-[mQn]Gf_brbHm`N0 ͈,JVk%$c,iM<'6czH0[ Q7/J^m؆zCY.`5uw:QNbK kd^;]Q$Iʤ*AW6qt!rdL !Z *W<ϢTF T[B,Uv&GDl㠓ʩ^{2q'FZEgx̓hh.ۿm۰QC;$ִoSC8Y;+4\Rމ[-M,&I8iT: XaX4SG|Y'COia0>V-:ZAEc` 4w\JaDBE>:ji :z=>]r2BQ."nG_[TƔۘ~c*7^MI( W/j0 ?z;8'|qMk p4u+Ӌ3Vӟ!/?\ywϻ|>6HLDz 8Fggӗi `ݡgI 8@_d%sɫP bR7!QrCd!><(jr_X9#DLp:P<5z.BS+Z66 "ol zFRnBSiAmA06w߈}/gܝCr[s)[X~Kιl2yDɗu2yn%oh.>LLvԡ1~OL= +ZM*iS֍[Y֓zеƫI+hb!t+Z[n݉hVLHM?AWzIVmpM#TnCs:wܨڰ ܒ!Llfnܭ| .#캲rYl{0+{w| k*YΙ[n'ul]Bނ;vl57!2B3 S 碇*?W|?*MduƻޫѷFѭxt\ACW R49|`p2['jr$;0'3jÄ]GFw.dtѨ\<-5pldÆ1λtEOp»Ғzee*A `cO|l*~PyCȋ:ov׹ͮ,Re@tv2`x^Qikkm=9eL}Y9G]?rͪ.|l27V*1VH(=BDHG8e!%q fmjլL}nC#8[s?͏,\P#2 'ןi ~>A5"uظM&J-{tyw@Pdq=$é@"X&"/O7a"=Dz4RJIw^ᚂW%nvZHXcf+̪+ gPu~F|2a-X !6mf չ=#5S MBq |('-]DkSڂe(9!~>u s4`*Ɯ+^:oS}먋LjMuޑTėzLDvYa!Ź̝[0ڠ,'ӽu!&9U# v=цJ%UF0t#zjpyU n`4fvZv 1]hAAݔ,b2W!1tKѤU&X58ڸjGJYY)е[AHv'[nGjE$P$Dv7Sv!Cg/чy:["TDxUq'de6zI@׉%>ƍ_}fNԕd S˟:)(%(ܧ@#1~OPOw_T+f={&/_N (, (op.*C1T c܍nN{APHc{]eURΉ$U2>< ŶVa=x6.z>_}_a5Zr"ʶMH'mj _|ьh=7*7Fx|G-D3n"F uYU\S]CAgYQ2u2єhqJӃ[ ?%/O$rB'ڡ^L,|qQy`)21HQ?eC Ppvg8gFSqt$V[Nݷڜt8#Z{r$tʃK$! 
5O.gLO(]UݧpnP_竡y>yaL[ ~ RRܡ̺;Zܰ_!}oP5 x2.I>c!Q-et:_@PwUMvZrxVLE"iNS<uU`y6 ~vӞb[E@+VpZ +ZkDJ{9._=c/ijO_*i@jh@GM>( wO):Y/9` C<+* P($]M·/]n~u cT+${&<Ha[@VVv4;EisxmA_1y.%}Epa4Kӕ&RS}^/s>exw5 tMzu `Gu+kb?`~`IJcMY*© A`jcaOͦK[i-)9s⧜urvsJP;ǰHT ؝*&<9.tSmu0 [\>&#؉ \?t3iͱl@8Zu':TA[5xeW Li6 Pjr Hv׵*NK!"epK%5ОB6J}8i zWTB2gHEޠhL&w㋳'SG}Wh1j}+dVH2;':!~#iAo:d9M]6hK.4IR23tfz2 ^+P75 F4j1LiEeR|~ě?_o}/`CӲ{>xkxcI/W^+l/+lo@/v[Q^8gB*A$j3F:&9EԜZ%?Wϛ W, oXlAہ^ L1 (TP;29@Wp~0 ^ G=ﰪgejޚ@aZF.'Q**HS,8?+4EwY7˅\ܘ*(mw 4HN}0APY'Igm9A%s ̷kK˴ m/3FJt "X`9N 0)QPxI:R$1$`$G} (TKkjS Z)rm@ D9DjJcA9Lp1IVTYqlE*jC AS焳#}D{ \t*⁧'~ށ G- H:(Vpˈɱ(m4͖J1-3@N3q9 %Լd5aU1") wဎ9뒧^6T(/32'K֎@1$^IP@E"c5g!ĐU1z_|NjN4 H C-!!.>e>*T_B>OꐸAژ)&.1"zn7=k2+KZ9ircQ` % axDJ& 1R@%cZ&VR}Ġjhǐs`YrH+x !$CqމR`pPxsK #z!$`Ơ6!RpQҎqVqhK1`DXPA2S:l%U[F6\Nh>]ʀr2J nG_OlϮ2N׋^) 3q+gW̯Ə#q !LVV/#go_ p4~Jπӟwxf8~_9ywsgWі`L*s 59L.}T2 TIqY3t}\ /.Z}xAI UP\xWQcs ABt120V?!^6,Pc-af_ 1q}| hgZܶb쇞{V^CnAs I[MD*ZvҜJ~^{MY{Y[or8\qUw) h ]Ypܡ8RTJ*W@)DuT--IDp 5'IIRXRD9IHV5;$IhNᡸ$QVapO вG8T` TYV8x(LIjLu1Pȍ DRt,mF̉i'ݡ>TeYxPեrܰ>`Oi6:e7sj X̼pKt8^eŵM% iJkuktSp,SKHB1 2u54$!XX$R0P1(΍lFp4CR?MMaia,0yo S]ݐ0I&$xIYG6%)R+)KJn>9Q\鐇$Y‘ "#7"I'(kSvjCXGvB ؔrCԾW bSj#ZC05)J%4qSd"B7"H^7$# &.%r1u=5E"5$Hn"6%6u$i&|B6%vQ3đ%bkSr➰>gniVo';fvj[hR'e2[VQxgI۫4[- ) N/,,׉d>ٞ_fh>xVW6nrqkųxlg)P9}5gк=s|?yrg k^rTgO?y9KDA1P#63%>(Bj^rP{ՉX$#W\ \l8}62ڝYSy5oxۉ 6gυe^7E tT^RK8CZ,Tu£@eXЫYϤ£,[\ЫYPp(l¾ݭ;x+TĔtBo;8G%fIsxӛE͞dtQ%i9,c*&'_:sc^}=ؾgp}=H8IZt9UpjNZGjxZT{H^i JEdSyM+Je j_8D "X:+!z;: >ݻ:Әt =GWMVvt󊔨@< ǘƤ|@Uqݻ;Gbg'骏D 쐑  "]n.Ȳl%JMG!(jɍkICIEbCD!QB"ŵ"(bwV'싋F060bkL{v05Q]:8)sz|%Rs֭w4i'9W0$U7KyU:.}jx]nH5+V#] lGN/)mX|ʹ8àI*?V/YhM=A}󃜀}sl|z5 0Z(P9/"qסfXHтG ,ˋ`Wf> Oڇ{qƊSnm%Uۭ V=$ u"xP,ZwewP`5r"7G֝>螣PL;ޛM>?'O`;3J_:\ DȚ h>2C{S;{iº[{c'hK}QܯyM)ӝAY>I=XB~B.ۓrD{7qOHF\\*S7{J(3[9Q`m Dd"(ILI0Tn'nK(kD:-i/}V CrM3!ZmPBc F% eQ"hcb,R "\K5&[ڽӹr.3H[v0'^di DoYCC0&!4I8g6Qx(palj屍HDDcsVk^6l蜦MVR)#^4B"b)"QiÄD h!5$TPZM- n`оhZ:pJM9Y9BIX`zqV:U1ViX$DGLb4v;BBD ldD1eIwXUVJQ5 \!XVQD N1w%6ơQIc "Gj!`J tPs%ڽ9Ϭ~E)jmʸWZC2E!2b"ţDjbˁ"āR(2Qj 
F3BegRP[ݻZIFpSF/'~>Yћnm8/w٭g^U(w5B"UG'ݤO\Uj t3OY_I%-/|?DHMd &kT7^LCAJE[al{t\M,\۟>*R8K:2Z.wa F[.xpu <3!-S  YHrFHp7#ʌ&1 ego޳yǮA'-=VghYj*;*%@!-TG*eP IB'&"(p \RY1`y` ǡD%X&a")4Ґ%~R;1'Qh%(;6DX,qEr@#z14fwUG6>x<P_KswU]8h ızX;G"z_Y|oǂqPWOg4A*' }YUqo\ere"4Ai%Ylc0tQhF4'$ҝJC7aе_]n5=Ng6Ϻ/o!|ͧ#xw@~|<ߒ ls;^#3;uAj1 ukl-u:26{<|] fva?gE?;ݘ!W?G3T-X& 2!.st u(4߯tt'ӏ ]ͳpO0ה>\:~}/Oޥ_|_zݛ'/ctn߃ F2mO_nܝ槯^vO~{{]s6X~zcS{hd^wl4s&]: ;7"Zn95(}Vf`E7nz5~-W|;$` v{ySt ]S9)MB>17 ll\Y/ `;$rh@v}` 5&˗pu_]zK/_a흝ޥMfB |5[\VhX7Iu"i ̺8קd4n·$a:KA|WEzճЛ3Gr LG?_M=YѻI3 9# Oћx6Wΐ%HNU('-3kG0'ˇqΝ_@A' o$DnlyS'ɓxy x9Hafϯ@-{L '0@ uHB܋.s/Pʢ]>4ǕE>1.YlVb믷9S1!f5WO t6HMq05<ѕzMR `\8aq7ο`Yn Q*j#\OdfQr5_Y"w_l" sx /v }xYk9&ۍ{nsrJ[,l[mE|n,!k~A5pmj~·67T>KMt][s+,=; T!kx㜗dUʌ)J!%%HQCrH`.'{4Fht8~ܛƨ\ۂ( a H6z!ֺN?ܐyfs7z 0geU:<e+ui<|W#%́'9GQ~uǣL}#^˔-SVONcZ(ggn"_;i,=:woO!RsLQ,=,SfGQN'}|XZ[aQ廊H46UҸ8!0fu[L-c5Y9":4ynI[:m᥏gMb!]ɲ#^;rL}41^ؾpwU:Mʍg6JmPˁ.1og hәx}W߼Ǜ=$xV#x5`#{ZkR~mcB~5z,EU͓f'WzM|b=JF~b'ƫs4"X_FDGWB*ǜP1hx(}$6!8nKN%x»*޷TzBy[NYlonӖF~LjEw=׷<3LG[\{?YC_5WV.>a@NTNK`H)j²?~5$hQvHR ZKɡ9o@ Ri (6[4@2vtBOիݶqa*v17w>A55 <ɓzQy& ish.զګHA0/|΅b1Z9]9!z$!mJ'*XF{(a +ٟLGCAPoS5T3DJmOTPhVVԮqUH#'[.TAjj6m.::sB2U}J{tq::P]plW>vM$J-ZtNB%D椤VTudJ㥔4q/]:/:iJ[4 GvoJհS(c)9IDg='@j g7 Z&hB@匃1ڤ.=.Zeۈ1h( "͵^"ƼcD: &P$d*E=%7; zR,mHZ7m7~oKiJXM<~JDH7,IR;%/Pm:G D2έ^DQT@9׫c T<~T$3/[K-G ז,Gqv8 r5?uGX) ]LF]%=[^[ߺR7{֨P2LXI=)M-KHleq)$_.]7)L]`ܡ(Ev iwƖ*-$+ ̍g> xm__ O›HY*y#Jf"˩Sz~a><8Î^l>~S ݑ*ʉ|}WM `B+/d6~uֶ18",+! ccTMΗD[@ib5ݪF+!ZmA=Fi==0l)w|i°Ul];Nr;gOVXð0,UdQΝrz/sitQѢq=`T-ڭ)!Aft9.xosgC-s2\_¨B93JR/apRPT~adAq$+RcCO1Ȇ֚]pNg 5 uЫ>0-pxgtpx g8OV# éa$s֔{1Jw8܃`M,\B{ݾ2xMqo)\QL72#w8K3ۧΐR!n݆m$5!|aLXh՛C]AlҸ~ue7Ǐ1JR@1轅JD O0f aK@i^k5]Wڦ BAP jQZ朇 ꌊN9:B39QDCr+ZJ nqCbNnd:/UjJƘBRE#忞JIPkn*Aoɿ <__bohKӰ[oazvE"|Q.]D(+ |r]`0̇",J/BB+(u1,h&Gy[ߎV-Ydf~Q, x`uŠx7kAiv([貓&|K;pMcJ8E0_S}:``<g ƏO~nBC2[ RZ  :gŎ:  5#'VSx.bY@BcpSڃfiq d875\ 8NE{@dп0iCiᝦzCG )ٹF6n>0$0%1(sXA#Q hM2M^:p,|pm.~ٲi )@U [ V(g꾬CH[MfrT( \X5Xi灏&/@P3ݧ0M;ttyoTXUeJo[oXMؘ H =gǁ'J9#qja

WH0xO3ڥpRnLm>6[Kn.|?,N֭ʿt\FK_U~ԏl_0osƤQp̠0m8卡@eXYD`SC6lŭh6aknq0Y-%&l7^Aa=i*2^G  LBbte-vlXdzb"doMjJ](gyBuR`dP -&(Yˢ1\;LBzϤAP6PNCg5C$K$VZO4TA.bpI'% 0J 0c$` A+̙v9 ,RY4 7H;;\=b[ 9 uP,}W(y=Yf%EZZSCT 2`߿zu(kWjG~-VPK)>Φ&xQFٿUgEcظ10pAśI};?Nph)FT__.nÇ q%/6mڝr.65!m-F}K~N?)]$Mvyג"ML$1j'7 r2~޶tb1@X ]/^3]#u/"GpdH}ؕ|2We8j$n6bJ+.EBI 1*{d˔JnB̢) FAm3Mtje/ۯ>[lFpggkϖ DZMivw=jy-fv|cG*2"z!(2Eˎo/Af|de@GS7W[{q3df1b>qW3JO2.PQ9]?\~cF%;沜WL| 6Dgv~=[8 jd\zȞFpa#ҏjVuO"2R6)rlhܥ$9Mʙ,!Ɛ54VԀ<;=hb/Aʆ͉D nTp1zBT.%B|>ƘR=,~|ifٻU_IzX@ޛ/=9)m;<沾ե9yz>v>?)'2{|- -]>Ft"K8_Ji!tvA=}جcD X69ICc7yD[ {}<;-5l"̂-B՜qNZb 77Ʈ $hv[YQ%hF,eЂ$`62 VJs=m0aBڰC=r;%g›2.v[Kw+Q-Yv $Q|L]Սɖ9ɤhVȒ lJ>OS@ǃ4Hë]ZٵX^ug(C|5`8.dKmsZyD]؛_}rvh=?_RK[A4d .z8LZp Vi WVYO,X/ QJVGqԭ+Cy,5,PVĒqgYf*ɒ`,u]Dg| e+vշ$Y4  ;T--e\ZYMMh~:jzlnI}QS~C[da_Yk道Ƨ:Z[ Ω^2^6DƆg+q2>>Hs i [4oiUoGMHSoivЗV^u%wՆVùOKU72pZxN\lSUu잪@V+-|d9U߁JL[Y6ʔW^MM@W/~3V0`ޘ'u[?qeetdo4/2`H޲XBK.$y#7K'.:j1%]%z:ǣޱ›E/ٟP]LTkމ/ "\p2 }i b^p,~w\t8@@fh<2 h{je4YRS;G,x~AHugDpV}@.h0-lt]LQiǵg5Ob퉷ixw H{Da0.gKUUODg3 rR.D{qc[  r Z0'IsHV@! (X%``D^ÕEIB2Lڈ*sdE2D.#Qb TJ'lԀdZoLxrB*KD$cB 5VQ7D, M`>ZvLN:I`H{DHuK碡!jIG #BD1ʁЍS1in8ޠ73~uY= 1?zw/I#8&{_vP:GC wɁr'!h0+Ӊ՝p?p(zI<~z[`<ŻQ~s6nBeA(Ϣ_~~E' >^vS(;\Lӷ惥ws~ ,ls y۝˿dBQDw;P87l5]ZPكl M%2pJ0u؆ 1pNǂ"!kn*) E pLpn~ H?^6y7sG3]}8 l¡VcB?I~(pn׹:] wްQ^K,x͹% p0̫GS$Vɦ(MiJ/ȿ/:G4K)87}Emur?p;8 8}1>3~Mn:K &d) %z6K6|4J8hY[nPc [)6?O׬AmgvzWۿ+w( /o6 Utًp)vGyy7++o*cl/~a.ߨ/s_AI\ &6B}b^5:aQ&)WSߺ5ckL=8^Π1{{}}!NoF;\n?u=#ܛ*jqu& ~YZ:4bS(<3ĤY 'FUY{@BK'T\pdcR DyVDG<]/<Vۭ (mc4l!+lomWM.5bS. 
κWF^]̪,nn6x'ǫ}]_UL+[bXy"C*8yAG`'3ʚӐ! >+iN TR2ze'ldZ)ڶSiL "ֱ6dB #=ֹ~9e#G HB(R:Fh&:oBzy΂ŕ QoSi4VznRH0Ø*+"HD8{@e^UdULV6MPтO[vC9Ixٺ+D#חco>R%" .D6FgVIJf &yD"., ƞ6ZђLsAIZƕof[!X x.N\OTfhٛgnA0 $ 2Zk~}IB҆ҁgRgV9ɻuY:/܎.Q5ݰR*p)!: G|^\,BJ]ϵO_?j88#)"͉=rFSlud/gO,e@f%7WI?Ƙ"yw1g$w ˞)~ '/ ~9;{Zz.+~s'$в=,JTDt6! iP_$/6DH/xdƺK^9eB _  9Fu jL [kFjOO +lCpo>{:ԆMjaFnC4GQ*tn#a#XtIZO=X^#e̮Ol3"QIj`lC!zr@W1+39M-Dh3EhPHK CL0C gP+aa4҉gNNPK.j`xe#C- e FB 8s1"5%q*LG+d0IZبJZYz?}#_Y;7hBM:Lkv gy yI~jHE-ְ5dQ_E)|BѠVׇ)-:OvU69"{ya܍wQqá?/o 6/nd)j{xA?~s)eC--D^Kջ $"@vt{\ {3c]˲:0e-䬰i|طVǜ+>c@2㡐Te3"8Uе5ztwcͩ'>r@^޷Bi5A#gzH_!2ؙ-mfFDa{ 젍q{eBl)Jg}#TE*iubGF|y!FD6ߧ5ƒ'^+g27cHyIKn5unPsr?'ϺS$р]Ѭt埃[#L[݁r<{jnwnmo`xѹ8T'̘CkΨHWDEz.o#< 76"Q4xߠBwBiyYLM@ JcseFxIP.SC!k_`ʹ^]'NϿr/nKJCZ-!ԦO K!@=d=z+Aj-@> p]ݘ>:[u0;DRp$TJנ!ٵBkҵ`BcĴ58XDYƅ0W H4AףԈ@+j14N(l0fՄ ֳ>DI>-"|wkX@"GW1TgWY@nÈG co^p ы ָhS#s; }ZCx'nf!`ehwVSҖ(V _\ &ՠ 촉Gbk% 1}Z.\tJb!eI,iRN$TiwAS,P;(vs+/<̪\NZfil@.&,Ш,t@UOԊ_IP@ʅz_2DYosJLRQV7z׫(W=V@.F|<2C0:'%^k;6AGO9I )ؿF, exRD:P oB vG'*x]֤sɬؿI{l_睦eb1HI+%tZ;l1G ]+M Î!MHU4:b4j&Qx/XM6`l~-]68kɒuDV$(F;ڏlbYCiT:%hRPeҔ E ]o[+gE MIE&f=5# ȡrp;nOt5}*,!˖p_c{2W/w*SKіr(t+?zyFY$B$*bcC^iGHmȅ\V(R4p vÆD39,mH G"o#P/b-p:uP.cvkCgSa83eqQ%[!V.19*93@fG]+cK!]$kg90$G楎ɣArd칊"GG>%j׽fi#>B=#cjnҨwW滩mٽ: @ЉN Z3Yn M܁D K}Zw8= 8ړ =*j˜)= ϥry?1R#q{in {^y\ O^R{$VNPY\ll+xJ{9I4< ^v\:O;cn\yϵ`U"J@ТANچc5Jiv֐nIK%7 j ]+_:@lG k# G!`z\Z>tM-mU@ [ fҬSྫ?vYx}z UK;˹Fyfgy)C+ȱ7_}~;;=v޾.C'L~*NU~21luL4zzg[! 
R}8N~=}#uNvި>u<;ə~~lrq7zߓ"Y3\.|yK=hkddK{ߎ&5ـhzzd|>==ɲnڇtuoHZ[>-wl'Ej*W$evŴfY0Xj~|ukxϿa1d+gxlU^y]t[j2FT&6墝 =S ]@emc^PQ@)iGnRn+uEf.8U~KufwvXMNڥ?\N^* : g%k5˂/_nZ5J̕g:'cTl/O̿[V*F)n,~Ypx.maeUz͆X TU4.2uȗ|g/^}..N*G# uKh*{,zkV%EV-Xq]rwj.(S&勊䂁_K+y>)Y~Y/&ϮU.xߴ4g=)|gpV]i$KMo_y./WD`:' ݻ4Le^kʿ;KgmzߗQ9ݹ~+J{k#dͧP;y\>ս r͢n7owLzoiNRT$#h+jr=I)Iq "oG q)7G#O>E|~?u񹘴ݦ=R;|c"b_WW>^}\-YI!c!FrIYT%IkrJ;sLxU7]{>YS,'tgMg2E'uub-.HNnE}=x|w7ZUvt Bc\}>|9%v?Dv"u9Mo+0'Z:P- ¿Y2ۅxe( JQ(A@eIyKY%KLA ^w [ivmЎ5DNn:Km-\d<ws xeyuݥb&ԉGM*5h2SM<3`v>-:\X>WuH¼è_yR=WvSAxEm]gKBX mT.^ =ci KKIbjb"\B%QEe):]+#Fۈ1 N@mvu0Z]j'EXBC)*&k =λzmי/:^"L ^U&G%)P9YYPPcՙKҠB aTu07 &BXI ij2fV³um= rΛʶ):4lRF'd}'&c @.Km_RҎG $x~nB ?R!Bh/JEFzviu!ԯVQ[#&`0;/_ΜTX q Y+@ݓs,˂Ȝ+~z њ#>._ȭxM:3 ތFYKb H !n Lf:Dw.egv _n(,^ -xhAM7{ YX 5-r:@Нj %{;NbI r'VuiK,nQ-9* A;o*iEɎWU TD2*yL K 7m-haFǥĄ`!j4 {Ľh+iae'|om+6 }һ{$]`Յ{IyS:]_T'`~a@[yEラ"X&]8=?Gtn$ch1>O~;Ee4'$L=e"Z0MԘ%7.!ocIEϢ:QܺTN,+o]wO7_״:dIVq:y!Kx +DTKS^71# *d>v$dmoDSJ+d" Ffi!9VC'YJzѬUz@IifB/yKv'zgE SM UEbmK@$I4 o-t7JqIadmlih&96WȠ&%oػ ɻ܊ϵ"}"y.J+ ;SV<^|1Eav]#({lݮ/N%ubqƺ2ࡱn tVi :C5FC]Sݮuz]cC5mrR1zƄ X0=a6kZ[=ʕZ.nގG:JJ=i`0'_տ?azׯzBu ˇ?C̦ |Ԗ7v/uF`5-gN\!6``LJ-JZK=dd!0k%[r.`ZyΦ'Lm᤭u*a:]]꿕_К-dLCl@'H1R6-]tb%-@>J2/WHEi7mP."I=M*Ζx"Cfس3DjP?rYpn׳LI^׳R:̠I\@(*dl]n$[dh m} +rxBNߖɒQDs3 I$-QDFt6)#S cFu\j dpd^le=$i1qrZ#2D'O@NI`elT4&Icm:ؘP'о]y1WBh!NՔ I w zoȁIVK=? obyrG0p~(_-s W.Z1R[BFs*:;UE䓖rnsVHuBT?ҫ3WFYv{4Z̧q-*hQ5ZTJXm̵9og ZƸGM'%:Ԕpln0@ݸsK;plG m*x Cԯe:܌"XF)Y^{?Kx-A8]07sI '7&Hո<7ŔHc*!ao|u x*.mQ4hXK\n1v>|߆ӱ[{ތl)RwLUgj}87iC 7ڳ*.|ժQY.'.>ɉI/25Lv1u|^^dxh|p$6L_R{qy]ޯã|tNۿvzuڬn//|ۻHwt(o9J[7Jyï|ѓs܏>u1#]|Ȅ#T +\)աʓ KЅZ157ӌb#\ߩ:A3%?&F, bJ,nI7 -Yۢ6H[e3Z@2 ^I٫jud#8yz mJ$Q V$eZsbT%8%TU?9è+FNiiFUfG"RYQ`ER;R j>C{LܞhBbR+4OWt3$Hf*0xrC,' _DFRR L0Xd5F x"jD:TЉ E0-?^;:nwח-yS>3.bh6&!=jn:׸+ڤ`όBeۑ"k,Ϳ!0"e$aѴL$j%4k=I"f* h"Z2):%z"€?D6V# ^&aU@J+l0 okG6=]Pz '3j?lԪDyFmed ehV5:vN':C\Tyr)G Qk:gҴZH 1k=@Ƞ;wܔm`hȈ!F9ZXO0uKEƘ><$dnoxY6~sH_9ԹZY1V J4}^>&cL 瀼wuJPEA.. 
O1JG7XTL?U,L<V*ri谷 L}k=]oh oR]xPtl{s ͌׋yq:;E[ Lӽio.nn2`;7Տ?005?cqE9PIz'+HC,7%\3@3fj(Zc d'&TɊ>4M&dU2|P[Ig~2FZ 3FuZ%`,J˛ԯW'g9Vzfw'H%v*fN wJ Չ@jH|QI2|Im殔TK^EF3 }?8۟j 316J<^A/0)(MkTo-ՖjTхG:c{ȿ˱mrփ*C^qs\d\ݿqBwu6p yģk`;ɀ?<Cc~yKt{! k/a3 &"yo':A' mVf)SCri%:dMiu㲓Rw'mJeƄsU7VuJ[˞8R4;OՂǧV˷!JN RH8Jvˇc8JFd$3X,l#l1ACy,+L/N)jɋΪ@!0H-k_rO7vRI]7^:*e?C,~0Bcrf9IBNEBFGc#kBFψbR0"Y% F,e@䛴FZN\gg+T D!p繠`Qφu%.QJOOK 7m0W.;@#xqL w׾WzG|. ]1Z^&Qm_5Ar'uo NAx>4_.ՇA> QY2qˀF_-Ajm+kH7;Y7  )d?x:IHS=P}JѓWeeUf~7V E2ݩwv粀d~<^a6D;>Tpr{"mnI7~?DӶ=RQ]#u RƦ;cjR!CW>}jH{{Qk >sD wAm:2ynArNE~'Cœ= a^Ö$]Ur )N>7 $O\3ɜ4[y'q>Ce蘤̓yWRwwş;wsDqȗ QVu+{}%$'.%?=+3N%Udϩ)^&T4J@]FGN1z%?'it tօ&RRBLpK "AAE荴.~LO䷙໴S# V "K_QG'͉Z! \ P2mQinp^)@46!CQ(CQL }p r4hd D~E3A8D9-砃dc^2-Z IP;1! *gU-.m7 gHgȒXM_%L=:@d "0tT(R^ r :D^>"K$ "/}R8|Q- -^o-'/>K#IlMg01g&/0J3Pt$ !lRBi2@II}Jh6x!%@ˬh."xn␶Ҷ굇CPXT l@7d "xP`Gm42-r)i 5C KG,0P,;A2Jrd=u JgB3=~qx2cB%{@#M OyT`4Z[^ΓZ igSLP3\Y?-n2\[ iv DV O&8o~z "~cxR._5>'#8@W'0o1*ԕNB}Z-{.#~j~Lz~ІD}cՎD]Hq•N[<RUƢGF=^e#dtlNPι[c__5ՆX~6f;05R>\鈳+ PqYhUEXRV跡BW#\pͨZ!^irvFr1Z"E.:4 w(1Q-q >1s(߳b@Q5>rqoh$A pHgDRcCz9b#vӻ*ǒFBiKQ(-.\VZ}y4ThZOyc#bH4:i DaAT8r㥡9;O؆ s f4E&}2EXv"PҒVrj[wvJ-imOa0X_DC+(y| iUvn$oZYB(! 
sL^r"-ʊ+db2<#p>0 rL(fTqHt)PFs \*QD Mt8 `0PYT1,lrM3D 2G-WdNvĪ5],1#DIUYÖ(ZpC\ya, oΙ80v]D xan ۨWm4%W_;thK*8t?Y;"Q!{sם9T\!;GgqD{U6>bt<1O8Elȥ5s 'erIû&<}sL9~yImQ脣|ӓkq;jQLk2R2O8,[˄#KLto#?cQ0B0oe0ZJQ !:yLQjHĀ*n[Q!&.ڸ v[4L֞i]ϔ`e}t]!νk3D'iF)hFHc:+[эbҪczit47hswmefRЃ71 61aG\rH⥵atC]sPXA9u:4r(+\,-YDJl h>8"6#LV|s:6U1bXaM7pqMߏKxzhFua0c3B FuT3b0[BoR_UL5!7NF!N @m3?Yx{}yDv~F7.!\>/usv~цs rh-?-ƒ'30BxլRzoS0SuwjENb}:G yem MMQSoz7[_Nh}e2̻4/nCXwnlJGMQA6mŚeFn hgwn*$|T NPqd,J( LWJ'=40;[Q-2C`V%wZ&m;@H7)DNua 'ˮ3/yfxOHEeg)@r7wKse>w.|pwuL~g^fiA+Ѹzz?stCGKE.$ǔRس0_k OyZFpTVE|~N-jiNxg;er&6բWZ-},lQIa&J}gaJI};AT">]*Qp/ isN!Y:g"[S+-ZyRJ!qh7;EIc㌬wgL)X1ↅ sU9GEX9h)z1Hka(mƝ}N^}jr^lx.Z<=k)VQ6#R6 ⠖$(ў&XFc*x'oOZ 5:mۛ^1xer+!x_۸mDZ@j8XLYAb;ȴ<3UPS1L69B䆲"0AGі<8k#I A zA:mkP]ϫo.+n.jߜߘ-5\$s_Yh\煔 i/d M(㼐tN#~.Q2z:ɐ:8HߪSu.Lnb'j΁G)5%3V`1Fƭv<,eT́ ;}$wFM-a.VN_\-{EzZBm!*%Vm+UX|,z3&0eDTs\TbPY@>P]XŎ47Qr>#9rA*i$ i.pz 57?6??ͼeeoۋ/僉e=&zMi$Zܬƶ[֏0{U&(j0b|f]%O?ݴ#V) 1!H=9tdGǝV" C-w V:E|?&#Q Ht8(`ƫ5FAdNs鯬,U28g#C_J&)DLCs>W[ňўF}L#)ǍKR9~o皡3kH68C+_eN>0RrAR5^PPs7Zɕ#DG濖gUļ3Y1sҦsmi~^ 8xz/_&‹.M/*B]SU1nڞ!iqSߩ˧'&GO'J|iOP1iвp:>h?MLsF[LSNpe>"QN 9 =:%ѢEIy#ZBvBUzBa+&Bͷ&iW)>]6F,L7db6 ײej>&7ȿCqS ܲO^Jc DnJ? x4!9xA[8<$A3٧NO^_z-(d$ͅ 9*MF(G2WΖss{%>-=h bBH]z؈Dq}0# Uy8|DK?|,p<֒3]Ex SѰjg<ܨq^`,SxѤ:֘2O)(kߢԄ˫WΊ2WoE5b$|, CaBDhLj( x\P+!=[QEFVN?2\'>  zvn[zr"2һ}N7M`7L0Bb)Aشۮ1!4RBR*YR!c\fZA ^(-n>ݠ^@ٱ+e_N&b a)tڇo߮z<#:- [s7OORWS\z3ZIf+z`XS׿\G?zO 9V?y b}I~=Hi ӧ'PԈdj {}fFRr,}M!w[.?75ML0ğri:=Rt)gv< ).@VwASrF;Jsnζ^0@=>kz)GDA ^$l!/{ {:[ѐ!ʖvD˯{Ub0joAҪhVtKߦ ?hUŠvzZ rXj}%5 ,=)%u:u*iȽ:9 G)&*4m;'CIbGR9` fXcjNK o> e}}旛sƸMdnM8zMpQU\3! h2Z$'X(JNw?#&ۨs2oMxfJT^IN׍3!8LfИLpʕqz눆Fkd(Fi7c 108~8wQdqYk54k3a7_؀-*pUګW T-;OB(ʀ3RBhaJ)S,eibXF#KsDLJJYe4mi]11AX faoyCl[kc N@J4LB!wE sΌM T8&t8J$tr5(ײRq&^Ƈ(14D菩8„q@p& x&Li nj]R3@K 2aξ9?Z:>c#\P&L )/S31b¬d-C/E@0l @+OuV^V0Ar&E;ʄmvQPK 4l frĠēgǴۉoܚ^puOu#RCDsxDr717%a.?Pn̺-qc"UR=gz/v/_iH8uaˮ9L1J+u_L+Ggɓ .] 
i /=RJ(%ӭp3}p8qrbRJ$M~Pox26[Vh7\ׯ.n*[3/±j4!kfȣ3\G@uM1 Hl]xD\aPh"@(41VHG/Ov=)a;yF RܦfʹSn2p;S9Xs ,גJD44_|ܛ3MrGx[ lD &̰ C@xRd{wʟo3:3Q`jHgf݌9L- fe1 r`un3~GD3`L"z/+bްFS!zFD^$0d %-#F(@irCL,wiT^ Q MI9KrY"(##ʌ܋A;q _tj#|&\/QB /ܠZ1 ?9( L㸘&ʿp ̘q?PPE /Ł9Z1LD/::f h<9PLMAt\<ʿq LvIv9fʩl_~Rĸ|CQog# xuC) Kꇏ;K%.}s3b{A>Dn&#2)䏁 X<ʆay=Vưjԁaպ3e";}}hͺnd_]_Ѯ9 c#߆Lrh]lq4:UyyXF޸T~yxhZ6⌜n'#-RpͤnΧY2MJxcC/ -%h*Q)ρM<ũ@0LTiPsKk>,U7O{Zq9jU0gV#Ijp7 dz9ح?E5וnyJi}k ZV:'zXsR]9zXiq?ۙ|;oiy'Itުrٔuc3|;oi{Da-ϵ6QЧw{@&*C>\7!rF B5p ]cPQ;egJYAz\ƺ@`*ia3=5l\z2ZۤU/CYdy,q$ތX2Ug+Bi@tX9+,DzskI:֠ {caEX]2`M"#c)ɹ` sѓ>'\~ܾ( 4`Mgu8)ACv } saT.+ ڴ=Tڞc*gbgo, Qslud8,+"2lp c2FfڦRg.HZ;2\Z̈o';. Ӳh|F~l2C.(aT[r <\(N¢@&ZKr!1b(*^`QQ)} LkØ,KT7 *I psF M+%Uζڢ>|L qQ?"(ïu5Qo3hޭfnp.[G̀$sڠCВL`ԭ}){JZ}2Ds3,HK+8gur4X2\szM?̶죛]_lkw-V؎$LL1Ty&"n4qx*\_R<xN:tGj$9;a' BH/(!{HzOt;*}H4E)rYk_6"L28,` 6.#TmOq8%~tW' I+om9?. g0b7^;bMIkB4gG' Zѐ~uhonM$ɟLRDi>*qN)F^QM?W/eI_F}g˛yfvCju'S^{ x5𺪁7d4n ̰D .47\&)eT"ImdZEMR,/T:яRjpTokw'M `y{7-[sUdVu0 -UU_- `2iVL:a eN1I!8(@ P㋕Ĝ^j^qSRj1YbWng86l}kz#;WML?J;!LVMٖ)Nw3'>{]Ph@{%3Q#(1Gtr)Q+ ܹ J[ws'@!J/1 JpZ@IZДj-ap"H.1!Vfv^"v6Sn)oϝ)\~ma0U$ˋřv24 P s`~0\iSBg4qhYshɣa B@h:J>L̴-P$ %vsWD|Z pCNCwr%~foB#qp7|A/^^{zy]jBXi%*G:0MsCHrUйMJ194ҖIVg!6( Ѵ`fӻjJ8)|$oS AO)W {rp3_ gyL{R4d2! 
¥YB& L92yʹ 6$MI"M,ZPoۢ5NhQ"/)9mZ+@щ?{^O#O|a6HP$]]&og Svu3`ۏC+k$[;J"(rcd|JZT|ۖteÓ--OF55zKmxf;rs~e&d5,U6 ׂ {_now[n|6'}ߓ <ܻv!s0G;t sp>J]Drp><)9qذ |Y3xh1-O޽lt3v]um}{̵~gd}fRP'p}Ae}߃R&2L ʸ N[1Ly{1>Nw8ѫ˧r'xtRbs 4Q q,V$Ɍ?uܒ5˱ S`:`JuD+!zs[ NFBI`a쁐ԮFD'BŒk#C&N,__A>S`VÑ \j#%G|t`31 } 0ҫWpښJ/&.6*wuou|wq&7Q57,OZ*Dъ7!ʞ`_.w MsDsl,|^uS@-nkC ەO^.K[L~vf`jdyFpl1]~Ey=oa0 q9W]{i=| +$]<%aȍlj(*mH3h5)uƩDM:cQꧦQ?t:-AL_}k[?;R/Q-ͿxhQ@3RqUxU\0AP~iO5]rru>L5zzCx~R*󰌗 |@sg=wmmJgϑ t*?fTNfiRڑe$;)۔DIHPPLOĉ _ׯ_U# y&ʦN]&ލ2!n-ud:-Ȼ%Rnz,䝛h+:nrFV.M9WPPړ8z.,䝛6eKqŗr'}Y9a4<ͯw۵nxe&EͣsNo쎂n[t21!H*m6ee F 9b)QtS@h\o9N FB 4#,R[rAHB1d uND (fbNGIX-GY:||kڮd2F;y8vэ8M5`cpO@z5Z{lr\gt=/H.#5IGj}!:̲ c4۔Apz]XHFaQ7v߬6iZ{~Lw <W3*Ҟ/DvNkvG(B_SD!d0[_HnO"kCSO  O ɽ `T'S7ZyiyԎa\%:O^%t$1>&HĂr.̘D E%rID)&Vqʙ9di.83#D@!hUJ* *G[HЗ7Dp Jߟ~o!ly &/,Ztc}6ڴ[Q퍅ȭ.Vi:m r+R]e_|J>̘r9:{(pP)'U_WGm Nea+PY@9 5T ܜ{Ti'ͻ Ty:![ٟƁhmAN*6{\|" l`&Ru_u\uo·D\z@RY74y |)fzi¬>=n4^ 3.6YJnĿidJ4[NiVHgbgjBv$n֫?~te~F6kkƨkò̺ẅ́V<#y8f ڈ*@0*{cGH-,f5:lߖ e!l1ɿ|UqcHldt (DĩЉJQƽF 8SHjMSMLHk; fxv?Ppo;\eG'4`L70̢sl}ɏe<ǒM%nptEPhO./r0 @o/9,Z#cGDQsm^nC.hLj ^ӂw?Ԇ.RBp -(L1(j^dl,Sm1ѡIG0+Sٛɽb5 ˩[{%BjMs81CN86d%LjT&PT;9 ͌`DʹQ)*HIE4(2w&cɈ'3Y)EQBoV .=K%߿?ʤJlϧ.*Ů[_Oe(h`B?呷t,lUSP^/ͮQ>SIYѯ̼an#BGUwW̫~x}O_kw;2qۿ mԜ4Z}]Z4+pl !GA׺lfe;V<0RJk(ʄ͸O}QoCȖ )\vc+WV/&ǥY/?Ҧo~bsheۼR,(n {VrckP}bF:\K]JOi5Z^G2ӂjZc >{[Y:DӫU2lQ#Y/tm%.F1ŏJFoMeV/E^Zu{?CмTj5MS>4ahGx'/ka-FAhX()W6j{U7U nFFGQT&TWN7YoڣA씺-"9(iLeS7aBԫ@[3r!¬oW\IIxD}Q0MqtN{/G N)iNֽY/F' Պ{Ek~1Z'!P_- u=!_:fj j=?$%7׎81A  rb~لWtAx}Ѻw;>5DuN^WPu^іtC Cwz(T,}f=\<;D˽IMvx]a"Ym(V() Y\[ exKR`vGj/ -`1q3Xl^{dFɠI{0Ҟrs`2&bp.(TݿΆrh]t?ӑN0&Өwȴ]FRD͈43A4&CصHyx:Gj)#T4@: ^I ț7}N;%Kh ۚwD}>;?){~A6XrOd݁B?=aXΰq"8A+DOTD] 3`TCtKHF~/aC4'ȇ݉0 R)dt<$%Te©:,fNT c/%Q126]`èc'蘚feu~Nlfظ 0y$"J&9pE%^?^|d$wt+| 6(em+{NqKXp 逸AUao,g8 Q32jD@KXr{\fs۝X395ǜ"T{Nf3Gӝ3mٺwp脀Bae.q|%‡=,OWh <$=Q_JfJWb0v98]q"\ }Z !%Aq#$22.Ј6hOPNO!`t|jQ9oq/>;0I|JPҙ mu\)tXf6Ul5r޵V6Ѡ{lf -t|UyzM➃ 柊n-W 𮞖}d1=.s pFh(R4g މ0b|Kr+%T&HU:KtWq}p}rKq5H[շtjmДm0C$[7k9)d+n&ƗFW_Q]L6[X%n77/Eo!'c vų>ϓi9ܼnqX5b 
&KOymҼ-EVx"tYʫx-piTv32!lY-4鐱n\5طDž-"!¨tiz|nfE[:߯ \^&x^?c3ˏhyoUo{!Ķy~e&lX5O7>0rt/.3ӿ~}ɶp#8c[Ef\$NGha$)WY:_J*˫k/~xD#<%dK"w)ǖR& ɚw =)y~wHGMd$Ou$t?H{6 v߰BԞ!aDy *DyfFa:8~a @#w$Bzf:f&M =OB:G)a}kf>m/[#0N:Il>s#I2gѨo8=5m[܆I,eF)cLV4ɊW:rLj_HmtB+`+ӱmg&]YavV­j7+F| +=$ݎԺB5?X%Z)7+ERXi)VJK5f+D+ul(aے`nJىkDZA( ƈ04sGHRcD%@hlb Ia$Ԍ+Nb.HJ$q*bKLt%#|$QoITBj͸>u4RdLHIm,D$d"$a|XJSm!ܤ.P97?VH*/ُ (WV.q}-^bl+u[_J(JkMვ^"uRE>AX)R7+-ֈe[lڶZVzHm/l+ZV#~!"0]JǬ9Yf1TV*R Ce[)p7+"W JY)˶R ]1/ bT:bVJ/JrRV22fVj Dw.J#s$TwRkC>e[)enVJYgn[jEذzl+f Li]kCi5S&RؤꖛI:SEA\<,ݎzH tǧTj?j5+b6{c7m4{|!hmC=C9Dr߁KyH1^EC_hY=gٻFndWJa\_`1,d2wM=ȲLf-[V?$ۚj!bkDž-zb1i=M?n4ZIYɻf0*܄_}pբVs6M[tn):QiQB61M'쭤>Ω}J+ a}&lrD-'J#Ǹd>7v.ly|kZsyt(xDF( ukhXL?>gwU*J7ZEtɾtmƍqHbkqLyigK^]ϕDӏk}؇i ?M&#ڽ9)^ܑJX\SoWW>(Gܰ*V˹0K]{?5#sS0z<|?5)hqM8͇㬊A]VT!Mʷ=\ph{(݇d2AI8Yt:;2 l.y=eTU5:C*Wn͆`F>%y6~ 4ۊ? w]B8lfDV-UvnLXkЌ|ؠL|#=\ n /RGR|-雜aӱi`$p&Q#gFA:9gxMn=udbAwdf6}8\G^9iM݅/66d7uK7ԮަhEpDc|~<Ⱦ-n_۴0:ܪ{86L`Sg==4˞1I=f5K1"l,oBDLvj}GYQq/Ucc/ /11hWԍvEUXǬ[tcr\=c#e6^w09_VV"1en=[q-]m" V& C vC(Bk04L|Ѽn|Ѭ*xש<tVBg.5ګ 3j|0|5E6lRύB4s"bUzWFQq̄^GaFhQC'ksq F #kYdzR) 62+=%ڛL+-2*s##,Bq]J3cd-pVQ@JvǷ+JRPUj"ZR^lSN5a k3;A–ed:+~)^l`C`73>&HHƵπQ,Po倆*%  e_0"":˾l}aTSbŒ**4G聯Jk T#<9#K s_Y~lry6}f8J!0,zрE s}AFYLz ?ϋ({c/ `2˼a r'(/?{8lz_i~٨&@TR}oҖ`}mǕ61M52juHIIp=RC}#$,u#ƃu3%?kL#;qQRgEsgKqhpX< !\c;:>xVGpV`BQͫN+NB=c yZG*Q8 Ggh(%ah +X`^BUhZ9™9g9ŭfZ"8IHy|y/2 gNBpK'/[Z>vnUm(.>)˜W,^eR%Vf^f=6Ӑvg$*l0T>V /Cc)OSm` ˅PYa#(i-ÜY 8"=sŮk?QDc\ {C`=Bh ;l jd'oWw_F}m H0q{o b͑\ \|7]nn[z|פxg3u؂Vd+sb<#*2*9ex,[\}sW~FJ1uN^|9_l` %#08^nl+ >?}Y~70[Dڀt/X /jeRLdTxlSE&y EUTij@YF+,4֬j*|Vksu5&Zg>@%`gu;{Wma͙iP괕BP3"^cKSXGԌG>ŀ>s"R6Ïo#Ӓf4*ZEp& ]2eʦaaVPKd4uQUWofD*Z>&w7\oXXsfid2+/wg0x|yvEjG[[VxdzGH7g𒰈7;8Ъ`I]翬bJ#G!)B/Y*e@yJ@*(Z+Ȱ[Qd0:BdyTP#Z*r6T7ߣ-a6ަbsީ΅svUL;GpaëuHHิ\h(}&CDl(M;-hTzZDPj+>M/8T n7f7dwOț?Es{0.?pϛ>/(<ȀA -Xs0Xn$6d>>o )`L}T[9\Q]tTP"% : jZ|-2qA$rZgv' nŧ"\ĪtpWNa kr'skڎ c+X:"֩{i)j@Ae)TC]WR{[+<[j%QeUORB>Bh5B$#R(ĸ<)΢¨(|tbt,n(*YCcݝ]Z T> Wgd/yt=8A-jewo6q~bg{7dt7}a&A8Y\q|wU,nCO_y=8-Gɮ]0: W\tOy s/؄YYߤ.F³|%* 
fC#:cJ~ߚvDcnu1:uۨgtEڭ~|_vkb!ZkLQznuqhD mTnsZwj߮!|=&CWUgV˘Duyl)D$T͠Re$fh-&{0ڀQ%TAXĶȊF^Et3F+w ɍOƼaSH.Fqϟ %-t6tZ05c.!a)R,#{?.-e=6`$ 07Kao ߈?П%rako".ͻaj&6_;ԅ J{Hؘ>.Xu(YJ<펒N.տ94%3Z;?)tb-h+ضaZDݹy'hiipMCG3SXS0.ҚR䲖Ym`eAJ##zs]$ֻ۾߿KeMX"+D4y, {cB|GPKE35[+ƴ4E C (RLuI(r6>wZ$9\H-(RLjxꮽF8ڐ'}U͎F+S!_U'I-W;JhGhSL NmWKm])U s[Vfc$u-H(1;!.> fTgXSl9?#a}4+T禈(4MpgBy/$B]3 yf:#9O0&P&,閛"*CKi$-uPaD"1$!\M)2fhdK˅ K.$6 _ȠTXCLGRS`޵8n#bKYs~f&$'A&ɾDQFܶvdb!%w[vBM7&n"}bX򄑌9x6l/`66.FC:$}!A f"j8`T*!YB0OU$4Fr&J&JƘZ<N&5ްg2譖Lt*B]mj 9&Jf-Kn.ʊ$ ){%!5!\cBrn#GK)9{ bE6JDcE\c N^ꮌm#B+c$R2nJLX`k|ž\3~aVI4M4)[Qv_W+(E_8 j*^M;*bk& ~5JɋP3U3Gֻ~Ċ /bY׹Kc cCJX(rR;KI W3WUT~& ݤBV9E| +Xz>;dt|M䃬ƨDq{Mcl`30DwF&) %'kSiN:*ZSLQi2nZ?,%1Y(>IBۨ??ügDϼgMH+RT Lr[(>F֟rJ;)ݚW.A2U vd9kC16h}S]lOքrݓ)PQJȊR}*U56n'g٘,Fv*qdo[EaDYM$5"e2t3E$XraԘ)ƌn1V"sec)JvEA ZStqf1q阢o&>i|KB[@< ^h `1 JWZ s!qNZVwˡs6õD'y%jm u:X^icАPWO.;&ex>^T 7в.7)8uxXGPy}~AcG+أnܫl/M"]zq% um@vʠ5@wW+DHy16R^Fh(:|֧PF3%Sz#H-iQ8bi=9Y65OZcM,Hޡ3sR+ TTT4RQ RӔ={8E s) O)y'pgb_hHv tPsMLr%ҷj0N[EfVѡ9)n0N[[񽝌 L)9[ Um.OD"RL ɀ̄rD%NbF b0$bǂP&8dO6tw޸l>=Ndw̳nuFKH*XYիS(띒+RzPRrMɱRIQOR 9-&ix!2/Lp&q{_͍fq g-#@ىMل[۰lGu'7cŅyAvWWO,Dl5I&w-O5{l;G8m)mAH4-ɝK~;#~>X:2 YtnY[{ (h`{ IQeϷ}a_*5E%QLIt;5.ECtdge=iۇ^Rv,J6^L`Ưi4t4::4crHZe쪧^l#~>g0<ݏM_uulT{TˮOZ%De D45/6iY:e2[LVܕ4g-7`ֿ-_G5Tc؆I>pLj6ew9abG''HvKdmN)km 9v]r)@r1!> Rx3kJbK?>D.$Q,AL)'ZjdP:51 eLݫ, *ɗ UJHRbNib3ֱJvxB(Yy"I@ 4K@DxX|t3壻3GqAS_U(l 9J~7x-EO߿=~EWi_:7͕ De-7d7׷Wl\70Nf~S')H ojb DMM雓Sv[}JW_f3.7Cs"A)R' a,!Z _(\}OShsڰ0B8ceΞ[qؾK$Zl3@Į$`(s>;\XnɅld/T/ѽP%nԐ!z;v (Q կg;,aq" KVHjА-"Xf 3"3&R$ tVG/=ٙg("T8=_Ds7%UD( tS2zO׃i,]`r2]]cj}fSg-9 qobNc1 O^(J6?U~?o,<=@$fjXW9lv0\t% /!T{ DS~(/8z,E\h{ " `@d Kَ`Id7;5pVC*",6?,.H* U1]nzЯJ,U d!iJK W*ړaE"\[z5D乜 FPuҡ\ǂS ж;G#=OhG܄}z=|,doNŀ8 J*]_WydOwn4]Q QCs?=z2% -QVOtN߶yj鄫(s~9:5.AT.U08 ݓ WD6K6٨k)&TՠY\G!*8-L.ZKOa]BmU rcA"@%f('7$0_\Tq -52B(`q F`C P LRZ ( Q9r\5܏YmcuV{q; b-)WiQBTUS]WS%.[JuM=Ӣ=#@ix(p.|XSkk._vrP6:2IJY47v7۟`E*^&);YN/ӷt[~. 
9C*٭D;)Wxd4j jdC$b4%(cǠ9gJ5R9vuWSMK TD)B"R?R2D)F3]|/XHf0HW-StLT Y]$(s8sjM 9>,3ƶ珯]~]RQᙵʂ0& ==-X3Ww:G䙹c;#x0h h$4Z;NPNt#YU4(1MNr Ȃ"IGGC0$~[(2['E$"j.OxtDcH6G5F׌+ٟS __ PL|t(_Rz*Pv3>6Y=з쌹2 QbiA>X?hA4O=+@RW8_AdMN?zwlQfm WimCO>NS}24{e=4"M@m֐(`jFeh<9bD{p1jZlK6#8nopWl$ǓMO K\ l] 5WCy`{2zϗf?uS.;7mE9Bځ*3ϐ%7VP1A: $g]T)#ϑçϟ?wƖwh(5}Vcb01]wj'w$I򄫍?C3 oÚEmd.vp^Ӫ4\zjk0ÇFxWW7a*#n61WC1%)m;@>TM9Yͳkк&{K5g#2fo YIm9 }&]3lrp4Zq|F@.fre+rئ ?+bՠ;)j‘DU?Žswl%~L_뭊*Yf->mm5VF7X4ݎKJޤO&FwIG9#k`Az ]"3EJj_Ay>3C} E)?) !<'̧`qrt=&SXL]1Jaj׮AyԱ &wLi?q}y*e$π8 (PDdq!4J1='4 Ա4%d$.{OM2Y-xiObbao t߇Oۏ,k[ҞN LeKo(#PŁLuOR/x E닿ޣ`a?꫰ͱv q,We7ttj^ Gƞi}ktԎB6?-q+Ҳ[._]x՟v/oә7zwzS}V%^G]|zy__xf]?4/]ܷ7.:Akef~ 7եme*v,2]ws~'ޝtGޗ[ \AߢD_*Z[Ƌw7NjI؝IWZuLЗn6?Aʿ yw콣_'in1{f H͋:{ESJ_%Мz6:Që 6mn&7ҝRQVa8ߚGTH0n91]MR$Ц-H+XVn S X3Im`="r /I]~ئQmi=yGS"4'qhލD(p&8 z87sYρC#oItս<Ղf5(=GGXХџTG J2RBI27Z~x3F) =A@n{yڦgRPv:*NBLERLP]JiRLPj7 7J sC)14PJJ-\F7J)wC)1SRKw=3JE:%~v2 t7ս'TKRPlTIx(E jӠ`( ڥLғ@)fn(TsDD$+JpCZRN3iw:QTT IQ J!2(BJwQS- JQj?B9/NzOdʤD&{ݍr*2S<ܭXN$!q}ڄEr\=m$öOhk鳀D!yVve<;_\:f`|gUYyCD#8? "NA\ 鴦s:qeL/A2nƂ#)¼:(*1.Յd1aCCu#9Jmוa,۱I-̑^WK 7*!8gH3FB#)t*TARӏS_P]R,D"[OwWlxin~+֙s{@8J4$rS7 I&W Eؑd 5$ByP! Obzf%e0 !>🰑8t$ZR+J "i]U#CjW;u]TT ,gR)5N9#b4ľzN> b2IHF % #ƥCɨy=d3G6Cս)MupPG  XV̵ u Im="9B*` ;2"%Dԭ vx<:Kԭ N*I LlxrX J8H Ǝ/6/LnĔA<1(hٿE <-F<>C|:fVGkWӇ4L]M]0!jXwj<@ܲz{`G$[o%{Ғ1 `?'-WΆ֝VTKl05 +&"i}׏gǶՅyK6U95ڛfnƾ]D HGv 0xfRq]hx+n2,gp,qY+2!Y{5dم+{M_?6/^UO͉7&|7QA"H98S%|uks4;?Joe+yoOo0-KH VJ{6]%%W 4Hy/2߮v) jD:˰@L۬'M,Bl wrTi ?3`sgm{ IŚ-wӎ:RgbjeG>x*ŒO(#ɏQ(v6&Mx|7He%鍞drL&w@pRiljѐ9t-[Jmj 2͔ B)d329 .~\z DbrUlIJ5\p|pID[qނB듩c0g<^GTXȘuUpnTDR+f\U Mh!Gi` y&TmpS 6s$u80 /k悦ܚOE<)`"eaB1{C< aٻm$Wf6 ,l?k>H]u*==%3*)QW]R ~A2R%aѶ=}$ܩgvxeQ0 =REKD@d||3c%JxmBsW u"zjdzV\x> +-_Z( 45i1!5.Yd[jz]=;S!)03!M1Qօ0FaHÑײҾ,eS; *.Ys"Fֲ@ ?mws0Zj-\R\up2j!ڕwq "xNIAG+X5V&TxpЈ|c_գ-^9KMJ\[Jr۩ĥP,vZ_BDI-^jd(qv2$d^c7 9)m5rZ$5n3`4 ʫ^y,\)~nф}E-_; 9BoWnӁ'U>c`û6?  
ҦmؽOKL Wwgo5]}IV†$,jD1e DX4Bg\HV@ۉyy0=uk`{l{@ OĤ%FXjE֙(˪ +9(I&%)glS*֝- z!#0Kf6ҷ<mg-k/Ֆ 4ml ff<C2 ѧq1 npX?4A`y1ӭ0ԌH 0Nİ֎ٓԒkYk9li[ۿn m_=4 6ӣR#~c`f( R+b~?a_)΂8gᘛn:b4| Vp|\nC wkZ-IY2SgW4D9Т+Fc#cerC Kط3 i6Itx-%kOգk}k+lěš f_&dԭeE+pS{<A3RtD%%{՞٠c:)#OJr^{/t5ݥ6B29iL]Dw%M-zr i /jBTN!Gs ȩ[2ӠZlTV*iYؗUxtoeyH V/a7p[L ǾH<}{Ep6yg~;{{b}lGR[=S~Bf2 BfZYJA5Zi?#*۩TR`S3xbDիJ4~ҋᪿ2ndm_|kʣRփQLly87z݇J|=q9/>7́+S.L(0T"xrlN2+?xP*o:=cg e 1kN$yJxd|jς6h0mLH5h+5=۫D5 k40۞J#A'jrH,Vaya0c9w&e.$'U8ҢpRR@kЎ$AXAYŭ廥4LD~^JBcX!QL0fVR(֖PƂ"[1@cEt VX+1vh'nAV6H$B(ܘ0CE:K `LHKZ(faH:KB+0Vp_'fQ:>Y ,ȯhPȊ8kЅg)L%TSe*X}%m2PDNV60W3pB( Ke fmv_BR{lWR`HIrHcZ[V(4b~5\Q|mɔ@>K& D*7hAu?;6a*ctcF%d7h؈Q Asfl*HtĤrl6>lFgOu'Qi$% ovڴlޔ[6@",`A^89[ $hFg1۟&%B<o$(NKuǎٚ,5!b׎RW(G0A>->"9𚅘Ac9BK%/7CQiA.#inڶA#ui-eN2zt-.#4$ogU26bD]n(ͽAzҟ nԣտbR,vo=n>. {xQaj^.?km_2 :{ꇡ+_(,M>f wWwէp_ԒG qƾ8vX/b~"bTwvWAWۧ hݼfzx׿ǸO8r~< Sy _q'_8npCobN+)iʃ;oޝ6fޏ ]D6]^cV/!{wQ#ݑA2F>tþԤ5B.)}sцap\i:45B~9ʧhΜ\>{`݄YbVVNGnetV=̐W>E3ds*XX؄-6mHai-<Ѻ1C^ݛS(d 32P5y-č'iqUj)IM=wL{; ܆=:6k,xy1gU5R߬nF`_KzB Ϙ3?nni3V*3xEowvͶ2-PL'}z4U6j'Gizmu5}:ɵGq1zoјZ2eW1yRT}ꊀ"$CQ}]g~JO!N7SC'^0s䱕$܏4v ާԌ9(Nﯟ\(xڽK󰑵A/eoE t{7#a@+c:$' @29(hx?- od9?^OOÑ1~it[XE7ܣ.hG}mc%c bֆI锅$R,,bdYT~Nh BLFmXĬVXʭY`eп> f.kpF?7Y06f5[$ CNn6 {O.3-rY%R&֔j'J6k?,M1tؗ]ZH[oڟnrԣ/Sta~T3+|/mDZyGT"*yJB%݇pD%,VcG N~2l&MW~;h@q8fO!)3}Qe8~] ]8ʗCӛ_/V\Q )((b:R*?+)傛ɐUVb3k_ \Mar%;P5-(O5lŷUփ~w{2#:5@A >zUmLb.U3gBV)p(DOp#@Z֑4z-++LA vj7S[ޤo<;?{v3]yA͛quhٻ6r$W(`?e-pWG,%$bK%Y,gزM*V{u5G] 3h%6G7pGE/T%/@Si7z].iIBtLK(У/umVU|ꘁ EՑ7f/IF]@C@78Hx6FlZ\=lnO~>5p/%s% GST5d'Sr,a}BDُ@Dɝ1kӚkaI8, K0RTCgR7~{O<ݚ=ngX^]rBUv9OGexevarl;]?FG+,($)r W}wBX o|8aq?\}=}ں-{gK[?~3'lnJڤsq˴i*wFNL6-W&xZ^Z]'pF\Qˆr*3WՕnʌvK{qYM4(Yl:.XR;)|u$:ӄ|;_o߾uql~(_,=>U-"-LR9e|Re%Z:+JYϋU/_t?Sp\yGx/ⷹ.w=y۝+_&tTZ ;xx;< :He'7=|V{JJZբv脘UTU d9(\=WgW=W9ࡉ\s~Hi=P6NHeʘTלEJ;~ظsҳB1.ٝG[;ǫֻF$R 0|b*gϨI@3 605ΠvgpEJU8TF_]\7S OrCPyEJGy~x+FVVy 4NV_{TP԰g'j/tf%\e`CtJa8$5Hj|*Q`;O|m4Xh*_?VU9V5(H칹:su!=GUuvBPa٪GDWOU-UTu:{\YZ3K/ggaוNg.jeZ&[A@{|8?q;R5-J9sǝIya@ezԡf`'MMfWJ:2_U-34apR/׹?rq 4V3#\Y Q!`%>DFlV[\9jy\)lj 
EAl3D7t0K3Jq~~Qz7cTDSU{ABqj . j| hY l2I 7[P^2BRzaN˲2 ZI4!Hn?}|G,h0#XfeQOdnp/|t@^ІsP'"X`z:G "jY9ޒ#i|zk4'V E`͖%vKRxNk"Ϭ2?"BDUu>@|تᮔ(R&dVw`݂ Y6Z݄).(T]l9(%8DEg/_IDG7[zT L l\7j3ن!|Lm`%+ޛQ`q*#匸mf8a2yrGnHu"A߃EE|%[ovkg{U+JH*<mq҂DD$RGi^ڏrV>O7ilLHd,2ňTb hP"_ Rg(ŭ:%eeFeB ܌)jD8RbNLZOyoMf4-K*hcb)cFpã:J)cySQj"5G,J8&=s*8Zm{?}*pQRŵ[|"Z%QT6rKA `YEy˳tM+0PR{a6-{iɡP@DF8mƋIU-ۗCEB}idwe1֊j!!G,IɴDofv) W]Æ8F5DpwbB<"mv[9C{PZ $h)ID05:gI+K p\O&$ν}R9\(bJ2zF2KYR9SFė{<.3uv-Me1Pl>puEFxEۑ9jҥr MF+^'hp4zE*H&kJ>r%ؒ7\H&NBsaD^Qx2nי²;"WYRg/_?7mz^gNv<_VMw_"3sUspĈlI^XLJ-O*TUWe2G}'>Q)zѢWy>vd^3޽}Q%KRY{ f}+ޝVWgTXގ4':Pc ]ǀSi+,5r֭4&Y NCƣOK"NZe1 AS sps =-%%ht- =6c]j!5,+˒!2%NbV6Y\.+yiKN5+v[߯jIGk7Dl5 ʦ JGŠ)bӄܗv(}W\@ls\=W8r3s917x(:gQ׍6p ,Dx-Z>34&ьnDUpp5Q渍 z-oy4J$ \-f6׈b*ג0U]h~^vad7#b; 2Im3$R]~1uZmN㳫9kN*}'H Y#voݵ|ni:$0>P\yr:DzXu>?p%>5*L(#J2p{kں\\kt Lgt _n_Pg0%p,^YbL]kSѧ (#XZ*݉a\x2gA.) J^g!biZ4qN%"Jj9z)+c7/)[ו;N{0pDANj&{D4Ӛ[NVfkpE{:#9uvjJ])Zy{('l6sECJ1us[a'U+wWlưŪeCۘV\Koon)}sɌe`B#WMR 3USGtȲJ΀otjv3f {vN we9Ol _e\xIffV&7/eA ^OHw?% ?ߟA-'H8Y7sj &C >WpjZ WԕG]dTԕNdj £-̃,'@kMި߇w^ _e~3DM4sW..]@UAF*iRStOŹ9W8h=xq֩,Z ~t]W"Ѳ@G0LL+cЫ9-d5D頷`+rΨ圱;1}1%d)=NLG#vsЏv?o|pI>X5i=BKXVeS\ec@{O鳶M'8VETuc@2P>ۋgtK2neRei1LoAP6i] R;g`2?PJ-놅=. ѥX 5xph0ԗ.Dz,{츂U]OdmcEM ܘȜ[LL(gڍcj\1mu;wz`SWeS#pK i?>,J„.xRr] q4XFAqwoj2`#,9Jwq B/ɸcxcS=:^F}1a @IZ+]JL+Jnb'i9&]ٕ|)Dd `y[VxQAiɵ4]7# `6mD`w~p RYD, *H5!a< } y+`zJ s-/pnw%97(?Y*[k.-q(Σz>9֮K6킟Z1{UBMбo.xY^IAm} oK౮Ι_lQ359".5 EGG-xgogn§2t;)} .3s57"r-ɜP%s`l.t6ÏL"ޘhٕ+(`GO2 ȋL $UgNU~{olwK\TVT f!LW\+;AǗ6ꠃ"ǫSqZD+11,#VdDl2Ӆa`/Ё>]/ Fzk_uSSRMV5ެ\R՝'-l|*M؜/]<т iLTczg.;Fje953)%a8eWVn(РF SbLhEx+Т)֢Bt&8=w,۵&z=^TUběLuxsUZ*oq]{.s4O!mqwRO ^4#j`V+j)dD'(TTo |yg[{7Rkb0%)O1}=PT^$KV@wWnwGyolZO:/x2g6UDW/1RkĮWx{;O駻^qOcTMΩudKcm ]d1,޴o˾BvV~O|9R˞0f#U.ìd2=a õ6Vx~3~@g' :]x1f&[gu2;QgJj}ߗ@si6`w3%TLW#BXT-a^!JpV**K8N =O\<>#wgw *!af[u.i 82CY f@;UXn䃃oK0Rz RBh{: I_)፳~/?B]K}P&|8H7YQWB(85Kvez[",$=zB<@m;9,B'tD|Zg eaA,/}[֣~apc=Nf,x#RwHbHBTV(1YIc_rՆΆ|9vlj3r+z0+Tl .&NTFfN%4 >ۿC_ ҷ̌i/[KvN' a~͛C󿁇_۵Rw! 
4:+=xrǗv[7i N=f0;})F_]=2"=,iwtP޾1ܲq7_f'aPnek2i,CRI2I=8YN=aun NYM vh_ J's/FS񝹿E-`i/5J[ ]} fitHyU X<3*_ǼYЕN^;2fH%h;z)g6 U"4HWnv0tTC7iC+?\3 _!^5\@*LDT-h nRP+EDFûF H"Kql>e$:K탗f7 :71sIYާO:̓st7KP8RoͶN=q8c85) cfy 2ytFv{q(AZ|iCEF&og0Q2A ̵\֣ pjS.aqc-F Xp'c(RKA!F"9(~$*DmCnjuFUD8@N%kmHdfK x- II2/W)$YjReZ9 )Rl_ݫ^> d$R$a2W"~h倡%`)6.{΀Y=^±߮q9T\89no쳔-ݢ;w<ʻҏbtYXـAaHVS"Jz$kv$GgwaY=hf RJk8qC: Wq?`7&bޚ D"E91@ %s$eKFZK9oOt:$^J~(c涔GOb9VCK!@[BryFfjz(=sqU })PH;Tj*T[0 gnd]h7m4KXSdAy($=Jk, 47CL9Y(}(ہLE&Myz6[zvz:5咟>e)dAzsNxf}Ŭ|j B1gGM?NgyU_1]ȘKi'3|5X䰺9 mHfaKXؗ,|qu}1+pPЭ4wC%5<5dzv RQtl|K3H[oQTkBhjP[FYb12ҟd(5^_fɽT' Z͙㵋8ĔIELkDin ےV#]^AP٣DG BP_Ck%ZPx#^(0~sB\oIP\XUX2%1A&$ !D#}_\؋w-\+fuxG)r,/|JQBtn% {6^,!ZDhd$,?/ysBrHwAk,[}.uG}Q p+V])Uh[he\뻁S0 ]0$-unZU}! Jx*RC.:l F:g`۶}>jpr7nP=jvn6URldu ym-,vtH뢷FMV/7?b#[@ryN8p8Yڊ6 @ mU`g5UZEwO,$) YBQc`߿xPS=b,;BLKSi -rg=P o۝ۑޢB^-jJsA^.Pxkr`'dnb>0Ԋ3V`L`:9LoZmڃ%䖍sp;[0\8Ơr`kDZsӚyz,\81h&pi s:k[+pc{l=Fryv[;Y V"TrjZB5({5Je4ՎVNgn HS8 M 1x) B/w/ }(`kkC`;W0[5Mql0, 59c2ӲxU `Z[}wra RwHǺj^&l]w<+@hx1s|mr\Kn~OCϾuD_ɖ 6v)O]QJRJ[VEKU~\zy8駢ptH`﯏hlT@"Ѓ`g^B\BAK BO'ף-e:9G-g*@AXtb[)d\Nݚ D"WMq[=|.#r-$`cM5h򡆉ϙFBo߄Y$d!qe0V0E{Z1Au(`!Y(}(ہLE&=3bN 3s*%8JQBV-!`Q\A;o^g--4>zshE xvmĜQ͕dʈ_߾=Mn&1> diwѯpq;>dOvfpwf0X$s,"IOOvO\rYct0 2dWdT䟾϶ˏkLd'sWaN>ns'-Ȓ8/8ALuDO"xu~RHO?]*HսIO.Ii_S(EQ5NrbFNѿK kOoۼm"9cfiRآ9[H0cNYXDY/9XXwv9۝%Rv\hպltG֖JxpWkFF_ ;F >(?]7DziEǴ?PhkGO$F¸I:jaĦ٢b0~Z >[ak lef cY*Y-b0ƞ=tYZ% IvMu{㺤F3-ٴO4:m0ΕPե->Wꈉ ']G{"DE%cQ;v( s$|YQۜAؐ0lMkv8{sk }2,'BUf1;ۓWh_'?~퇍42}5}65 n/i"DF(n.y^B!*9ItA19LFʰo&0-V?$G]Y[zd+"C?zx4oFW"O7ݣi,ZT9Aebpm}-I˾r|.Ѐ*ك/5YsIBxEvoC u׶ 0/M4D\O8~2y -5ۦBryg~mx{`W7?R2Z $W@[^ HwF_{" ֻ> `1/>_:g%x,a@ 4Stn4KLi$/pERL}(GgK`)Ȏ”Kg |v}bH KtK;(R}=[[OGzʉ^ Hǩ/\EF<یk ӥc,=/&a ΡWqk}%$wYNbˁexvjSFH s:qڌe:/QNkSH^vva*Y~w_3Dngs-Szw}qhq!Vn.(ۛۛo-bw2Hpf_yn;A鸕xͿ> '"Z C 76xu]ڟXI52;{kA4vãQ9: A2'Uz`^BLxp5)v1 ltHlSs:L'"p|B.R导卢l>y$/ĉ aIH9eW{#RPeC10k(p\QPIRA6p;bM>hVXQ1ܟW˽ 5ZccyD+bbdfքxfAi5VQ#ռmӔIbJ AlZCb [*؄XḤRX#{1ib  gOJV,ļ!#."c=Bi !W`*Y1{2阺d's)A&[{nlG9ZH ̓@Sϝ:7@ԧE y";%TQQBJx[WLH*|Z 
,JӚduoD2Ll%u:;xIoy3(q-AJTo&<ì*Z9|9S@7=EYxpcH搰7n;C}b}e\v7t]g{y;hTFD%P_޼k~P_ǥ?^_G-/au\6[u좾a:qLJ;&TnƴKs>+Q$䅋hLi1vӍpDVA蔾#G8i, EHmj1i7J1Hwh#Zj:Vފ(Q !!/\D[22+Ÿex92D逄Kj6QvW:t<3kTb%%͙-g{T7"%ND@֨8U8AςŐ*>x, J gQ 3 3iFG V ZZԕ!S9Z C- aj(VY2w!MiFnz'Bn&6s& &l (=[ ɐ,ΔRM@T˽ߓ+"QrڄmI.%R~OJIutfdua 4XRv6Xfc]&–ZfK΀jFXֱ|B$*q6;A:MXJRӕ<~h|q Ω)>቎9á%{JԓL==rTlN@}nZ-JipE6]vZ|TwiIOV-ɴ觿N.>=91POGؘ->˲̂@ұ&Lݔ"CȓC&s4P(_,V;Sq)L@)Q>$=cўTk^v4uPeGv ?^M_33iڳ"XQѾC),C.h))v{́c炖%CH "bA[))S6!O$QFű[yXvBB^T#9vӠJ1Hwh#Z2] vyD6p-S@#؉J(TӛJ0$ N>ϝ "q+Au3xʝǨ>kQ(j|9$Ii|n%Jf=W'hr0 Yj)#[Yk*^J\a՟jI=d$a@zXe Vh PQZ+ʪ >1SG5K9DS}Gx'HR* ZCJMuiK)iRt^O%>FYj.=m)$MJ9g'rRgm4/J)JiR*xR8M5Bù*'I1hT$lQL%u/#e˘t5sHg}VJ1X 2 ^C :$fa`s!t7O͏n/_'6ZT$J5.i>;/])-e"rEb9idz“7j}ULA?]姭n~-Y7%>3(PTJjFS?zߝ :xS+Vu'+`Í'22i&TU1J /0R=J! T +CFe_ Ed=} gn~ƙ+QK-YXMZ͂qKgV[,RY/@! !"C$r*;rY 0(u:V.1ʡJ4SOF﮿ҊOePUPq%Šک!3+`~8͍ UY0g07jY Jj1Y_?HIa)ѨO{U h mFJއ^0_%Fr\|}V1ULz8~K ,1[Nd#Z7Tߤ)+ FT~ 1s1`#ɷGfcYICYY8Ιٌϭd;۱ƿ}GGZTԶwdWTէcSau6:~)l/DMF w/>!^߻ .l'LK(9$2lo~o]x/b-lr@)c2% X,iw-b '`I%`}$9ّI}κ%|PTEM}(z| {F 2aUSE+0W28i_mr?L,z>+Q hq%RnǾ J(iC_E MWȇk]7E.맼\ ]^_~_f:SitIwK,12Nդ9K F*7Ѫie(0){% I3HAםJbS;NYSbŃcn:+g8Ca9zҤqiW ˁc%Y뗛`~:i.00d)8\x=8B9BW0K "G6Y"lEQ"1EUm5glE sPXԮF/km #.V>EQ`QP5ꍃޡ!$D -Crf>l|(,hS K591qqTtOf6v[!|{&tN i! Ksg@Z. "PWE،)Dj#EЩ5#CHRaw1U $7:4z/14(8J=v2`9֬}FAHIcTK.R ]iK.IBlNF i> m amw=tm/f, 9l&u`cP zZ8<˷K4HTK@KѶRu%E _o0{y)5uǔ89Jγ?fy=Y, N|5>lSF9RmNtˆ%Ʀ؇?IGj?h"ɧ廛?ĝN)mEC_lҌhⴭOɧ\~AtKPw~ ;hkءg`w$K1>:WQecʈɞx1_/{\ڀ'ns^Wjd`=|/0|`j**O˽ ;u%ْ/[Mq|ȓuo&pkiӢ?M۫q>V7ה/M0'v3arܢcA5o߷@]Ю76(/M8|xxuZ.bR˅VaVh$côUb4ve`lw^ /ajK7?Osf8ex!Dcag8 =t!u!b(|ZX8c}D+X,J^ JV \*x,t*7oy2#J[+VuHBq2*Dʸrjx߾do͚var/home/core/zuul-output/logs/kubelet.log0000644000000000000000005071667615134451164017723 0ustar rootrootJan 22 15:24:12 crc systemd[1]: Starting Kubernetes Kubelet... 
Jan 22 15:24:12 crc restorecon[4698]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 15:24:12 crc restorecon[4698]:
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c476,c820 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc 
restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc 
restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 
15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 15:24:12 crc 
restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 15:24:12 crc 
restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to
system_u:object_r:container_file_t:s0:c5,c25 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 
15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 15:24:12 crc 
restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc 
restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc 
restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 
22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:12 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:12 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 
crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc 
restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 15:24:13 crc restorecon[4698]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc 
restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc 
restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc 
restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 15:24:13 crc restorecon[4698]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 15:24:13 crc restorecon[4698]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 22 15:24:13 crc kubenswrapper[4825]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 15:24:13 crc kubenswrapper[4825]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 22 15:24:13 crc kubenswrapper[4825]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 15:24:13 crc kubenswrapper[4825]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 22 15:24:13 crc kubenswrapper[4825]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 22 15:24:13 crc kubenswrapper[4825]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.349718 4825 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352651 4825 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352675 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352688 4825 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352692 4825 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352696 4825 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352700 4825 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352704 4825 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352707 4825 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352711 4825 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352715 4825 feature_gate.go:330] 
unrecognized feature gate: ChunkSizeMiB Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352720 4825 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352724 4825 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352728 4825 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352732 4825 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352736 4825 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352739 4825 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352743 4825 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352748 4825 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352754 4825 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352759 4825 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352763 4825 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352767 4825 feature_gate.go:330] unrecognized feature gate: Example Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352771 4825 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352775 4825 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352779 4825 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352782 4825 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352786 4825 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352789 4825 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352793 4825 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352796 4825 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352800 4825 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352803 4825 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352806 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352811 4825 
feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352816 4825 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352819 4825 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352824 4825 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352828 4825 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352832 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352836 4825 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352840 4825 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352844 4825 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352847 4825 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352851 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352855 4825 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352858 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352862 4825 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 15:24:13 crc 
kubenswrapper[4825]: W0122 15:24:13.352865 4825 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352868 4825 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352872 4825 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352875 4825 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352880 4825 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352884 4825 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352890 4825 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352896 4825 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352901 4825 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352905 4825 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352910 4825 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352914 4825 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352919 4825 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352923 4825 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352927 4825 feature_gate.go:330] 
unrecognized feature gate: NetworkSegmentation Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352933 4825 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352936 4825 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352939 4825 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352943 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352946 4825 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352950 4825 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352953 4825 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352957 4825 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.352960 4825 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353260 4825 flags.go:64] FLAG: --address="0.0.0.0" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353277 4825 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353288 4825 flags.go:64] FLAG: --anonymous-auth="true" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353296 4825 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353303 4825 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353308 4825 flags.go:64] FLAG: 
--authentication-token-webhook-cache-ttl="2m0s" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353316 4825 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353323 4825 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353328 4825 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353333 4825 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353339 4825 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353344 4825 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353350 4825 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353355 4825 flags.go:64] FLAG: --cgroup-root="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353363 4825 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353369 4825 flags.go:64] FLAG: --client-ca-file="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353373 4825 flags.go:64] FLAG: --cloud-config="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353378 4825 flags.go:64] FLAG: --cloud-provider="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353383 4825 flags.go:64] FLAG: --cluster-dns="[]" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353391 4825 flags.go:64] FLAG: --cluster-domain="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353396 4825 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353402 4825 flags.go:64] FLAG: --config-dir="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353408 
4825 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353415 4825 flags.go:64] FLAG: --container-log-max-files="5" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353424 4825 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353429 4825 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353434 4825 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353441 4825 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353447 4825 flags.go:64] FLAG: --contention-profiling="false" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353452 4825 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353457 4825 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353462 4825 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353468 4825 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353475 4825 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353480 4825 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353486 4825 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353491 4825 flags.go:64] FLAG: --enable-load-reader="false" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353496 4825 flags.go:64] FLAG: --enable-server="true" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353504 4825 flags.go:64] FLAG: 
--enforce-node-allocatable="[pods]" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353511 4825 flags.go:64] FLAG: --event-burst="100" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353517 4825 flags.go:64] FLAG: --event-qps="50" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353522 4825 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353527 4825 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353533 4825 flags.go:64] FLAG: --eviction-hard="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353540 4825 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353545 4825 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353550 4825 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353556 4825 flags.go:64] FLAG: --eviction-soft="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353560 4825 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353565 4825 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353571 4825 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353576 4825 flags.go:64] FLAG: --experimental-mounter-path="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353580 4825 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353584 4825 flags.go:64] FLAG: --fail-swap-on="true" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353589 4825 flags.go:64] FLAG: --feature-gates="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353594 4825 flags.go:64] FLAG: 
--file-check-frequency="20s" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353599 4825 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353604 4825 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353608 4825 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353613 4825 flags.go:64] FLAG: --healthz-port="10248" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353617 4825 flags.go:64] FLAG: --help="false" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353622 4825 flags.go:64] FLAG: --hostname-override="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353626 4825 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353631 4825 flags.go:64] FLAG: --http-check-frequency="20s" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353635 4825 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353639 4825 flags.go:64] FLAG: --image-credential-provider-config="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353643 4825 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353647 4825 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353651 4825 flags.go:64] FLAG: --image-service-endpoint="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353655 4825 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353659 4825 flags.go:64] FLAG: --kube-api-burst="100" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353663 4825 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353668 
4825 flags.go:64] FLAG: --kube-api-qps="50" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353672 4825 flags.go:64] FLAG: --kube-reserved="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353677 4825 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353681 4825 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353685 4825 flags.go:64] FLAG: --kubelet-cgroups="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353690 4825 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353695 4825 flags.go:64] FLAG: --lock-file="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353702 4825 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353706 4825 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353711 4825 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353718 4825 flags.go:64] FLAG: --log-json-split-stream="false" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353723 4825 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353727 4825 flags.go:64] FLAG: --log-text-split-stream="false" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353731 4825 flags.go:64] FLAG: --logging-format="text" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353736 4825 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353740 4825 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353744 4825 flags.go:64] FLAG: --manifest-url="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353748 4825 
flags.go:64] FLAG: --manifest-url-header="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353754 4825 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353758 4825 flags.go:64] FLAG: --max-open-files="1000000" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353764 4825 flags.go:64] FLAG: --max-pods="110" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353768 4825 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353773 4825 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353777 4825 flags.go:64] FLAG: --memory-manager-policy="None" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353781 4825 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353785 4825 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353790 4825 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353794 4825 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353808 4825 flags.go:64] FLAG: --node-status-max-images="50" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353812 4825 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.353817 4825 flags.go:64] FLAG: --oom-score-adj="-999" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354047 4825 flags.go:64] FLAG: --pod-cidr="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354055 4825 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354063 4825 flags.go:64] FLAG: --pod-manifest-path="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354068 4825 flags.go:64] FLAG: --pod-max-pids="-1" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354073 4825 flags.go:64] FLAG: --pods-per-core="0" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354078 4825 flags.go:64] FLAG: --port="10250" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354082 4825 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354087 4825 flags.go:64] FLAG: --provider-id="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354092 4825 flags.go:64] FLAG: --qos-reserved="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354097 4825 flags.go:64] FLAG: --read-only-port="10255" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354101 4825 flags.go:64] FLAG: --register-node="true" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354105 4825 flags.go:64] FLAG: --register-schedulable="true" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354109 4825 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354118 4825 flags.go:64] FLAG: --registry-burst="10" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354122 4825 flags.go:64] FLAG: --registry-qps="5" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354126 4825 flags.go:64] FLAG: --reserved-cpus="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354131 4825 flags.go:64] FLAG: --reserved-memory="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354136 4825 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 
15:24:13.354141 4825 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354145 4825 flags.go:64] FLAG: --rotate-certificates="false" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354149 4825 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354153 4825 flags.go:64] FLAG: --runonce="false" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354157 4825 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354161 4825 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354166 4825 flags.go:64] FLAG: --seccomp-default="false" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354170 4825 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354175 4825 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354179 4825 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354184 4825 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354188 4825 flags.go:64] FLAG: --storage-driver-password="root" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354193 4825 flags.go:64] FLAG: --storage-driver-secure="false" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354197 4825 flags.go:64] FLAG: --storage-driver-table="stats" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354202 4825 flags.go:64] FLAG: --storage-driver-user="root" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354206 4825 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354211 4825 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 22 
15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354215 4825 flags.go:64] FLAG: --system-cgroups="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354220 4825 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354227 4825 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354231 4825 flags.go:64] FLAG: --tls-cert-file="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354235 4825 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354243 4825 flags.go:64] FLAG: --tls-min-version="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354247 4825 flags.go:64] FLAG: --tls-private-key-file="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354251 4825 flags.go:64] FLAG: --topology-manager-policy="none" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354256 4825 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354260 4825 flags.go:64] FLAG: --topology-manager-scope="container" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354264 4825 flags.go:64] FLAG: --v="2" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354271 4825 flags.go:64] FLAG: --version="false" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354278 4825 flags.go:64] FLAG: --vmodule="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354284 4825 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354288 4825 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354406 4825 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354415 4825 feature_gate.go:330] unrecognized feature gate: 
SetEIPForNLBIngressController Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354420 4825 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354423 4825 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354427 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354431 4825 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354434 4825 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354438 4825 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354441 4825 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354445 4825 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354448 4825 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354452 4825 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354455 4825 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354459 4825 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354462 4825 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354466 4825 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 
15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354470 4825 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354473 4825 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354477 4825 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354481 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354484 4825 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354488 4825 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354497 4825 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354501 4825 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354504 4825 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354507 4825 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354511 4825 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354515 4825 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354518 4825 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354522 4825 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor 
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354526 4825 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354529 4825 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354533 4825 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354537 4825 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354540 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354544 4825 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354548 4825 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354552 4825 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354560 4825 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354571 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354577 4825 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354582 4825 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354587 4825 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354593 4825 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354598 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354603 4825 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354607 4825 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354612 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354617 4825 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354622 4825 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354626 4825 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354630 4825 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354635 4825 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354639 4825 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354648 4825 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354652 4825 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354656 4825 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354660 4825 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354665 4825 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354669 4825 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354673 4825 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354678 4825 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354682 4825 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354686 4825 feature_gate.go:330] unrecognized feature gate: 
ManagedBootImagesAWS Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354691 4825 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354697 4825 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354701 4825 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354705 4825 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354711 4825 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354715 4825 feature_gate.go:330] unrecognized feature gate: Example Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.354720 4825 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.354735 4825 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.368378 4825 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.368442 4825 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368579 4825 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368596 4825 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368607 4825 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368617 4825 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368626 4825 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368634 4825 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368642 4825 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368650 4825 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368657 4825 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368665 4825 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368676 4825 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368688 4825 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368697 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368706 4825 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368715 4825 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368724 4825 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368732 4825 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368741 4825 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368749 4825 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368758 4825 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368767 4825 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368775 4825 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368784 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368793 4825 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368801 4825 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368811 4825 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368820 4825 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368830 4825 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368840 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368848 4825 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368857 4825 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368864 4825 feature_gate.go:330] unrecognized feature gate: Example
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368872 4825 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368880 4825 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368888 4825 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368896 4825 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368904 4825 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368911 4825 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368919 4825 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368927 4825 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368935 4825 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368943 4825 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368951 4825 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368958 4825 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368966 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.368974 4825 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369004 4825 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369012 4825 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369020 4825 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369028 4825 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369035 4825 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369043 4825 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369051 4825 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369059 4825 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369067 4825 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369075 4825 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369087 4825 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369097 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369105 4825 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369113 4825 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369121 4825 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369128 4825 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369136 4825 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369144 4825 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369152 4825 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369160 4825 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369167 4825 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369201 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369210 4825 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369219 4825 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369226 4825 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.369240 4825 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369465 4825 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369479 4825 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369488 4825 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369496 4825 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369504 4825 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369512 4825 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369520 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 15:24:13 crc
kubenswrapper[4825]: W0122 15:24:13.369529 4825 feature_gate.go:330] unrecognized feature gate: Example
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369537 4825 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369545 4825 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369553 4825 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369561 4825 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369569 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369579 4825 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369586 4825 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369594 4825 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369602 4825 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369610 4825 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369619 4825 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369627 4825 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369635 4825 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369643 4825 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369651 4825 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369659 4825 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369666 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369677 4825 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369687 4825 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369697 4825 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369708 4825 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369717 4825 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369729 4825 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369738 4825 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369746 4825 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369755 4825 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369762 4825 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369770 4825 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369778 4825 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369786 4825 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369794 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369802 4825 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369810 4825 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369820 4825 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369831 4825 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369842 4825 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369852 4825 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369861 4825 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369869 4825 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369877 4825 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369885 4825 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369894 4825 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369903 4825 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369911 4825 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369919 4825 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369926 4825 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369935 4825 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369943 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369951 4825 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369959 4825 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369966 4825 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.369975 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.370016 4825 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.370025 4825 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.370032 4825 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.370040 4825 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.370048 4825 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.370056 4825 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.370064 4825 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.370072 4825 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.370079 4825 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.370087 4825 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.370095 4825 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan
22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.370108 4825 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.370747 4825 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.375555 4825 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.375705 4825 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.376622 4825 server.go:997] "Starting client certificate rotation"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.376661 4825 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.377049 4825 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-04 02:36:31.227082023 +0000 UTC
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.377237 4825 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.383950 4825 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 22 15:24:13 crc kubenswrapper[4825]: E0122 15:24:13.385566 4825 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.386989 4825 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.399908 4825 log.go:25] "Validated CRI v1 runtime API"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.424293 4825 log.go:25] "Validated CRI v1 image API"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.426366 4825 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.429064 4825
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-22-15-19-31-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.429097 4825 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.444511 4825 manager.go:217] Machine: {Timestamp:2026-01-22 15:24:13.442424986 +0000 UTC m=+0.203951966 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:8d0c9c57-c027-4cfc-93dd-2f319dfeea10 BootID:63828c1b-c3c3-4e3c-af40-4df88d9bdc0c Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ec:ad:2c Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ec:ad:2c Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:c1:c6:45 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:86:cc:5f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e5:ef:d3 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:90:5e:85 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:76:8c:6c:8d:be:65 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:8e:52:73:a0:05:61 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.445026 4825 manager_no_libpfm.go:29] cAdvisor is build without cgo
and/or libpfm support. Perf event counters are not available.
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.445354 4825 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.445707 4825 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.445871 4825 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.445912 4825 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.446113 4825 topology_manager.go:138] "Creating topology manager with none policy"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.446124 4825 container_manager_linux.go:303] "Creating device plugin manager"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.446338 4825 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.446374 4825 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.446652 4825 state_mem.go:36] "Initialized new in-memory state store"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.446729 4825 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.447671 4825 kubelet.go:418] "Attempting to sync node with API server"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.447690 4825 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.447710 4825 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.447727 4825 kubelet.go:324] "Adding apiserver pod source"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.447742 4825 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.449377 4825 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.449708 4825 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.449774 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused
Jan 22 15:24:13 crc kubenswrapper[4825]: E0122 15:24:13.449830 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError"
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.449800 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused
Jan 22 15:24:13 crc kubenswrapper[4825]: E0122 15:24:13.450187 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.450354 4825 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.450860 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.450889 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.450899 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.450908 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.450921 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.450929 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.450938 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.450950 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.450961 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.450971 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.451004 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.451013 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.451188 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.451536 4825 server.go:1280] "Started kubelet"
Jan 22
15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.452234 4825 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.452265 4825 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.452939 4825 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 22 15:24:13 crc systemd[1]: Started Kubernetes Kubelet.
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.453364 4825 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.454709 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.454776 4825 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.454809 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 08:34:56.13497898 +0000 UTC
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.454847 4825 server.go:460] "Adding debug handlers to kubelet server"
Jan 22 15:24:13 crc kubenswrapper[4825]: E0122 15:24:13.454966 4825 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 15:24:13 crc kubenswrapper[4825]: E0122 15:24:13.454533 4825 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.97:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d16f11cbc74b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 15:24:13.451515062 +0000 UTC m=+0.213041992,LastTimestamp:2026-01-22 15:24:13.451515062 +0000 UTC m=+0.213041992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.455051 4825 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.455088 4825 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.455074 4825 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 22 15:24:13 crc kubenswrapper[4825]: E0122 15:24:13.455240 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="200ms"
Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.455703 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused
Jan 22 15:24:13 crc kubenswrapper[4825]: E0122 15:24:13.455756 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError"
Jan 22
15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.457422 4825 factory.go:55] Registering systemd factory Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.457445 4825 factory.go:221] Registration of the systemd container factory successfully Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.457943 4825 factory.go:153] Registering CRI-O factory Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.457959 4825 factory.go:221] Registration of the crio container factory successfully Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.458062 4825 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.458099 4825 factory.go:103] Registering Raw factory Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.458132 4825 manager.go:1196] Started watching for new ooms in manager Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.460892 4825 manager.go:319] Starting recovery of all containers Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468142 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468234 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468250 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468263 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468275 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468287 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468297 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468308 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468323 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468336 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468348 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468360 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468372 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468386 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468397 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" 
seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468410 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468424 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468438 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468450 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468462 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468473 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468484 4825 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468497 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468510 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468523 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468534 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468548 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468560 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468572 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468583 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468595 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468613 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468626 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468638 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468649 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468660 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468673 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468687 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468704 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468718 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468730 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468742 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468754 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468765 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468778 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468790 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468800 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468812 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468851 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468864 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468877 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468893 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468929 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468942 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.468958 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469046 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469062 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469075 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469087 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469099 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469111 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469123 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469134 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469148 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469162 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469175 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469187 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469199 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469239 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469255 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" 
seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469267 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469279 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469292 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469303 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469314 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469325 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469336 4825 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469348 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469360 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469372 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469386 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469397 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469408 4825 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469420 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469431 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469441 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469454 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469469 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469483 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469497 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469512 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469525 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469545 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469560 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469572 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469586 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469599 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469611 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469624 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469639 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469654 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469666 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469679 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469693 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469711 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469725 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469740 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469753 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469766 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469779 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469793 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469807 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469823 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469836 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469850 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469862 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469875 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469888 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469900 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469913 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469927 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469942 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469955 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469967 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.469999 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470012 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470023 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470035 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470049 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470061 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470073 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470086 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470098 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470110 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470122 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470134 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470146 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470158 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470170 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470182 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470194 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470207 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470220 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470231 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470246 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470258 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470271 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470282 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470296 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470309 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470321 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470335 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470348 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470359 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470370 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470382 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470395 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470406 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470419 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470432 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470446 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470458 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470471 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470484 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470497 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470509 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470521 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" 
seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470535 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470549 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470561 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470575 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470588 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470601 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470614 4825 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470626 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470639 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470653 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470665 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470678 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470691 4825 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470703 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470716 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470728 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470741 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470758 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470770 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470782 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470795 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470807 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470821 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470833 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470845 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470859 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470871 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470883 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470897 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470909 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470920 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" 
seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470932 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470944 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470956 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470968 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.470996 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.471030 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.471043 4825 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.471674 4825 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.471715 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.471733 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.471748 4825 reconstruct.go:97] "Volume reconstruction finished" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.471758 4825 reconciler.go:26] "Reconciler: start to sync state" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.488583 4825 manager.go:324] Recovery completed Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.498409 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.501062 4825 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.501270 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.501348 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.502525 4825 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.502558 4825 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.502583 4825 state_mem.go:36] "Initialized new in-memory state store" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.510286 4825 policy_none.go:49] "None policy: Start" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.512297 4825 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.512344 4825 state_mem.go:35] "Initializing new in-memory state store" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.513085 4825 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.515661 4825 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.515707 4825 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.515735 4825 kubelet.go:2335] "Starting kubelet main sync loop" Jan 22 15:24:13 crc kubenswrapper[4825]: E0122 15:24:13.515789 4825 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.516961 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Jan 22 15:24:13 crc kubenswrapper[4825]: E0122 15:24:13.517416 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" Jan 22 15:24:13 crc kubenswrapper[4825]: E0122 15:24:13.555328 4825 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.571059 4825 manager.go:334] "Starting Device Plugin manager" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.571116 4825 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.571128 4825 server.go:79] "Starting device plugin registration server" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.571496 4825 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 15:24:13 crc 
kubenswrapper[4825]: I0122 15:24:13.571514 4825 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.571730 4825 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.571811 4825 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.571825 4825 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 15:24:13 crc kubenswrapper[4825]: E0122 15:24:13.582507 4825 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.615901 4825 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.616098 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.617444 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.617481 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.617493 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.617611 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.617948 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.618056 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.622507 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.622559 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.622574 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.622656 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.622723 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.622743 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.622801 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.622808 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.622945 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.624465 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.624514 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.624532 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.624718 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.624854 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.624883 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.625081 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.625170 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.625206 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.626120 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.626145 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.626154 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.626260 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.626270 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.626295 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.626349 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.626448 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.626603 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.626863 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.626887 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.626896 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.627026 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.627054 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.627449 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.627483 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.627501 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.627629 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.627649 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 
15:24:13.627657 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:13 crc kubenswrapper[4825]: E0122 15:24:13.656247 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="400ms" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.671919 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.673201 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.673246 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.673264 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.673342 4825 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 15:24:13 crc kubenswrapper[4825]: E0122 15:24:13.673929 4825 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.97:6443: connect: connection refused" node="crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.674358 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.674384 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.674405 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.674420 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.674438 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.674452 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.674468 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.674485 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.674501 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.674514 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.674528 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.674587 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.674679 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.674738 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.674804 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.775828 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.775894 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.775918 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.775941 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.775957 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.776006 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.776030 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.776050 4825 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.776049 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.776109 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.776144 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.776118 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.776179 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 
15:24:13.776167 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.776228 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.776202 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.776070 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.776154 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.776490 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.776558 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.776623 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.776724 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.776815 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.776971 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 
15:24:13.777061 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.777096 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.777342 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.777178 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.777481 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.777582 4825 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.875120 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.876741 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.876791 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.876809 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.876841 4825 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 15:24:13 crc kubenswrapper[4825]: E0122 15:24:13.877464 4825 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.97:6443: connect: connection refused" node="crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.945593 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.954693 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.973285 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.976283 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-da08a59c7383898d62c5e8208e04f0dd103cd7769df06ac34fceae033216cecb WatchSource:0}: Error finding container da08a59c7383898d62c5e8208e04f0dd103cd7769df06ac34fceae033216cecb: Status 404 returned error can't find the container with id da08a59c7383898d62c5e8208e04f0dd103cd7769df06ac34fceae033216cecb Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.977573 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-aacd807ddb96a9c518c37d033d4a0f2ceae9247d3fb8b07e1da9ebcad4e090c5 WatchSource:0}: Error finding container aacd807ddb96a9c518c37d033d4a0f2ceae9247d3fb8b07e1da9ebcad4e090c5: Status 404 returned error can't find the container with id aacd807ddb96a9c518c37d033d4a0f2ceae9247d3fb8b07e1da9ebcad4e090c5 Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.987915 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:24:13 crc kubenswrapper[4825]: W0122 15:24:13.992318 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-ea4d7dc830771b973200e53c176a26c3f0be6239540a530a3130bebc79026916 WatchSource:0}: Error finding container ea4d7dc830771b973200e53c176a26c3f0be6239540a530a3130bebc79026916: Status 404 returned error can't find the container with id ea4d7dc830771b973200e53c176a26c3f0be6239540a530a3130bebc79026916 Jan 22 15:24:13 crc kubenswrapper[4825]: I0122 15:24:13.995764 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 15:24:14 crc kubenswrapper[4825]: W0122 15:24:14.017853 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-cb761e089eefdc4393e39532701c2fb745b68498374fc95f1dc4ebe16ed99cb0 WatchSource:0}: Error finding container cb761e089eefdc4393e39532701c2fb745b68498374fc95f1dc4ebe16ed99cb0: Status 404 returned error can't find the container with id cb761e089eefdc4393e39532701c2fb745b68498374fc95f1dc4ebe16ed99cb0 Jan 22 15:24:14 crc kubenswrapper[4825]: W0122 15:24:14.023653 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-d8881764a9747b2a5d7337c439d831c4648e5d3d16e25af4555892d0ab451422 WatchSource:0}: Error finding container d8881764a9747b2a5d7337c439d831c4648e5d3d16e25af4555892d0ab451422: Status 404 returned error can't find the container with id d8881764a9747b2a5d7337c439d831c4648e5d3d16e25af4555892d0ab451422 Jan 22 15:24:14 crc kubenswrapper[4825]: E0122 15:24:14.058118 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="800ms" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.278382 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.279533 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.279567 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:14 crc 
kubenswrapper[4825]: I0122 15:24:14.279575 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.279617 4825 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 15:24:14 crc kubenswrapper[4825]: E0122 15:24:14.279954 4825 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.97:6443: connect: connection refused" node="crc" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.454533 4825 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.455746 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 07:03:06.016321496 +0000 UTC Jan 22 15:24:14 crc kubenswrapper[4825]: W0122 15:24:14.497159 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Jan 22 15:24:14 crc kubenswrapper[4825]: E0122 15:24:14.497237 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.520509 4825 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf" exitCode=0 Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.520586 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf"} Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.520666 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cb761e089eefdc4393e39532701c2fb745b68498374fc95f1dc4ebe16ed99cb0"} Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.520836 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.521973 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.522041 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.522053 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.522280 4825 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="edbd73cf546d17782f4c06dfbe6084c22ace44d3a6cdf01039d7b4473c771db5" exitCode=0 Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.522335 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"edbd73cf546d17782f4c06dfbe6084c22ace44d3a6cdf01039d7b4473c771db5"} Jan 22 15:24:14 crc 
kubenswrapper[4825]: I0122 15:24:14.522354 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ea4d7dc830771b973200e53c176a26c3f0be6239540a530a3130bebc79026916"} Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.522410 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.523202 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.523220 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.523229 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.524068 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.524930 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.524952 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.524962 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.525338 4825 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b7bd1405bc592021fa0cf6bd6b9347ef4917bf2083a8008655d85a9535e38346" exitCode=0 Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.525385 4825 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b7bd1405bc592021fa0cf6bd6b9347ef4917bf2083a8008655d85a9535e38346"} Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.525401 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"da08a59c7383898d62c5e8208e04f0dd103cd7769df06ac34fceae033216cecb"} Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.525456 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.526107 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.526121 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.526129 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.527781 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68"} Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.527804 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"aacd807ddb96a9c518c37d033d4a0f2ceae9247d3fb8b07e1da9ebcad4e090c5"} Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.529163 4825 
generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="925b518afc020307bae58595958195389a3144a6b7aaff2147592a72fbe158f4" exitCode=0 Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.529187 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"925b518afc020307bae58595958195389a3144a6b7aaff2147592a72fbe158f4"} Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.529201 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d8881764a9747b2a5d7337c439d831c4648e5d3d16e25af4555892d0ab451422"} Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.529314 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.529842 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.529859 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:14 crc kubenswrapper[4825]: I0122 15:24:14.529867 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:14 crc kubenswrapper[4825]: W0122 15:24:14.647214 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Jan 22 15:24:14 crc kubenswrapper[4825]: E0122 15:24:14.647289 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: 
Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" Jan 22 15:24:14 crc kubenswrapper[4825]: W0122 15:24:14.784413 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Jan 22 15:24:14 crc kubenswrapper[4825]: E0122 15:24:14.784513 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" Jan 22 15:24:14 crc kubenswrapper[4825]: E0122 15:24:14.859489 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="1.6s" Jan 22 15:24:14 crc kubenswrapper[4825]: W0122 15:24:14.920815 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Jan 22 15:24:14 crc kubenswrapper[4825]: E0122 15:24:14.920958 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" Jan 22 15:24:14 crc 
kubenswrapper[4825]: E0122 15:24:14.979833 4825 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.97:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d16f11cbc74b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 15:24:13.451515062 +0000 UTC m=+0.213041992,LastTimestamp:2026-01-22 15:24:13.451515062 +0000 UTC m=+0.213041992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.080530 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.083106 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.083135 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.083164 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.083184 4825 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 15:24:15 crc kubenswrapper[4825]: E0122 15:24:15.083684 4825 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.97:6443: connect: connection refused" node="crc" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.447529 4825 
certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.456371 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 10:26:14.807206264 +0000 UTC Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.533503 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd"} Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.533544 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5"} Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.533558 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219"} Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.533649 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.534405 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.534433 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.534445 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.535002 4825 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9fc160b726b5a1e7d7a3a0a708960c434a52c3d5e32d7fcceaaaa5a895a2357b" exitCode=0 Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.535007 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9fc160b726b5a1e7d7a3a0a708960c434a52c3d5e32d7fcceaaaa5a895a2357b"} Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.535140 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.535795 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.535838 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.535852 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.538440 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf"} Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.538464 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3"} Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.538477 4825 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f"} Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.538489 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761"} Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.540011 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4c7bd4f05b8362d0e74900120afee1ec61a6cc125af950b4e7d4906836ad9f52"} Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.540084 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.540924 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.540952 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.540976 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.542854 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b0877de75267dd3219c7d77a784f896f75f5c4aafdd4fedda14f49d858064ffc"} Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.542880 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4c1d84c16dced7bc950479100fb3a934b5522d4ede9f73a3bfc5f084bbc0f853"} Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.542892 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8c3fdedb467b8b1788321723365713bab9c0cad404c56cee6dbf32d4d9bf2c60"} Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.543006 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.543662 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.543685 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:15 crc kubenswrapper[4825]: I0122 15:24:15.543693 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.457329 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:38:24.793048215 +0000 UTC Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.547204 4825 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ecd5cf5db7a2886467cb09ddd797649686452fcc88f87d7a0d9526fff7aaf7c9" exitCode=0 Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.547267 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ecd5cf5db7a2886467cb09ddd797649686452fcc88f87d7a0d9526fff7aaf7c9"} Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.547371 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.548132 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.548153 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.548161 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.550858 4825 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.550856 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006"} Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.550895 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.550901 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.550964 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.551815 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 
15:24:16.551834 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.551843 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.551882 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.551924 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.551950 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.551956 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.552013 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.552029 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.684492 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.685741 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.685821 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.685844 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 22 15:24:16 crc kubenswrapper[4825]: I0122 15:24:16.685880 4825 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 15:24:17 crc kubenswrapper[4825]: I0122 15:24:17.022626 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 15:24:17 crc kubenswrapper[4825]: I0122 15:24:17.457609 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 19:45:19.726020661 +0000 UTC Jan 22 15:24:17 crc kubenswrapper[4825]: I0122 15:24:17.518813 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:24:17 crc kubenswrapper[4825]: I0122 15:24:17.555872 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7b07983f578726c3d36bda6d581f7b621bcfc1929e43620356c661a9a4cee835"} Jan 22 15:24:17 crc kubenswrapper[4825]: I0122 15:24:17.555927 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"94a97be1a14ca8ceb1e0ea30d527ec7e2bce4a71aa34de483daee25131d99ced"} Jan 22 15:24:17 crc kubenswrapper[4825]: I0122 15:24:17.555941 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9000f78a65704a4e466dcbc80ccb83e25fd3a95f09e030159c71180c005c281a"} Jan 22 15:24:17 crc kubenswrapper[4825]: I0122 15:24:17.555951 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"174bc6d11511b9de0bf2a9f278b736122763b5ed13595b345d16ac458eb07a7a"} Jan 22 15:24:17 crc 
kubenswrapper[4825]: I0122 15:24:17.555955 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:17 crc kubenswrapper[4825]: I0122 15:24:17.556318 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:17 crc kubenswrapper[4825]: I0122 15:24:17.556769 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:17 crc kubenswrapper[4825]: I0122 15:24:17.556796 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:17 crc kubenswrapper[4825]: I0122 15:24:17.556807 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:17 crc kubenswrapper[4825]: I0122 15:24:17.557141 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:17 crc kubenswrapper[4825]: I0122 15:24:17.557174 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:17 crc kubenswrapper[4825]: I0122 15:24:17.557189 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.205295 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.205553 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.206863 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.206898 4825 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.206909 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.458447 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 04:40:51.252766475 +0000 UTC Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.542319 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.561821 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"08a692822c40c48ca2f324598a1c147c978002549782f3137ffa2d5e600edb39"} Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.561886 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.561944 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.561944 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.562996 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.563024 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.563033 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:18 crc 
kubenswrapper[4825]: I0122 15:24:18.563000 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.563048 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.563066 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.563079 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.563069 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.563197 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.805023 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 22 15:24:18 crc kubenswrapper[4825]: I0122 15:24:18.958517 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:24:19 crc kubenswrapper[4825]: I0122 15:24:19.458703 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 08:40:47.92002437 +0000 UTC Jan 22 15:24:19 crc kubenswrapper[4825]: I0122 15:24:19.564541 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:19 crc kubenswrapper[4825]: I0122 15:24:19.564541 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:19 crc kubenswrapper[4825]: I0122 15:24:19.565666 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:19 crc kubenswrapper[4825]: I0122 15:24:19.565720 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:19 crc kubenswrapper[4825]: I0122 15:24:19.565733 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:19 crc kubenswrapper[4825]: I0122 15:24:19.565884 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:19 crc kubenswrapper[4825]: I0122 15:24:19.565910 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:19 crc kubenswrapper[4825]: I0122 15:24:19.565922 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:19 crc kubenswrapper[4825]: I0122 15:24:19.583734 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 15:24:19 crc kubenswrapper[4825]: I0122 15:24:19.583920 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:19 crc kubenswrapper[4825]: I0122 15:24:19.586729 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:19 crc kubenswrapper[4825]: I0122 15:24:19.586771 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:19 crc kubenswrapper[4825]: I0122 15:24:19.586782 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:19 crc kubenswrapper[4825]: I0122 15:24:19.589242 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 15:24:20 crc kubenswrapper[4825]: I0122 15:24:20.459075 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 19:25:00.884448649 +0000 UTC Jan 22 15:24:20 crc kubenswrapper[4825]: I0122 15:24:20.566567 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:20 crc kubenswrapper[4825]: I0122 15:24:20.566580 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:20 crc kubenswrapper[4825]: I0122 15:24:20.567398 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:20 crc kubenswrapper[4825]: I0122 15:24:20.567457 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:20 crc kubenswrapper[4825]: I0122 15:24:20.567475 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:20 crc kubenswrapper[4825]: I0122 15:24:20.568777 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:20 crc kubenswrapper[4825]: I0122 15:24:20.568821 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:20 crc kubenswrapper[4825]: I0122 15:24:20.568838 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:20 crc kubenswrapper[4825]: I0122 15:24:20.997784 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:24:20 crc kubenswrapper[4825]: I0122 15:24:20.997972 4825 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Jan 22 15:24:21 crc kubenswrapper[4825]: I0122 15:24:21.003477 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:21 crc kubenswrapper[4825]: I0122 15:24:21.004017 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:21 crc kubenswrapper[4825]: I0122 15:24:21.004050 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:21 crc kubenswrapper[4825]: I0122 15:24:21.240041 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 15:24:21 crc kubenswrapper[4825]: I0122 15:24:21.459569 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 11:32:54.236748182 +0000 UTC Jan 22 15:24:21 crc kubenswrapper[4825]: I0122 15:24:21.569457 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:21 crc kubenswrapper[4825]: I0122 15:24:21.570463 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:21 crc kubenswrapper[4825]: I0122 15:24:21.570501 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:21 crc kubenswrapper[4825]: I0122 15:24:21.570542 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:22 crc kubenswrapper[4825]: I0122 15:24:22.460623 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 16:28:05.287448299 +0000 UTC Jan 22 15:24:23 crc kubenswrapper[4825]: I0122 15:24:23.461305 4825 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 02:30:56.455742049 +0000 UTC Jan 22 15:24:23 crc kubenswrapper[4825]: E0122 15:24:23.582892 4825 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 15:24:24 crc kubenswrapper[4825]: I0122 15:24:24.240287 4825 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 15:24:24 crc kubenswrapper[4825]: I0122 15:24:24.240386 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 15:24:24 crc kubenswrapper[4825]: I0122 15:24:24.462128 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:21:27.900511361 +0000 UTC Jan 22 15:24:25 crc kubenswrapper[4825]: E0122 15:24:25.449749 4825 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 22 15:24:25 crc kubenswrapper[4825]: I0122 15:24:25.454307 4825 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 22 15:24:25 crc kubenswrapper[4825]: I0122 15:24:25.462559 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 10:26:50.459271752 +0000 UTC Jan 22 15:24:26 crc kubenswrapper[4825]: I0122 15:24:26.018828 4825 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 15:24:26 crc kubenswrapper[4825]: I0122 15:24:26.018915 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 15:24:26 crc kubenswrapper[4825]: I0122 15:24:26.034452 4825 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 15:24:26 crc kubenswrapper[4825]: I0122 15:24:26.034531 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 15:24:26 crc kubenswrapper[4825]: I0122 15:24:26.161427 
4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 22 15:24:26 crc kubenswrapper[4825]: I0122 15:24:26.161716 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:26 crc kubenswrapper[4825]: I0122 15:24:26.163419 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:26 crc kubenswrapper[4825]: I0122 15:24:26.163472 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:26 crc kubenswrapper[4825]: I0122 15:24:26.163483 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:26 crc kubenswrapper[4825]: I0122 15:24:26.195233 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 22 15:24:26 crc kubenswrapper[4825]: I0122 15:24:26.462833 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 05:46:08.339217768 +0000 UTC Jan 22 15:24:26 crc kubenswrapper[4825]: I0122 15:24:26.581247 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:26 crc kubenswrapper[4825]: I0122 15:24:26.582097 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:26 crc kubenswrapper[4825]: I0122 15:24:26.582146 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:26 crc kubenswrapper[4825]: I0122 15:24:26.582162 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:26 crc kubenswrapper[4825]: I0122 15:24:26.595495 4825 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 22 15:24:27 crc kubenswrapper[4825]: I0122 15:24:27.463699 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 11:55:59.977949991 +0000 UTC Jan 22 15:24:27 crc kubenswrapper[4825]: I0122 15:24:27.584295 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:27 crc kubenswrapper[4825]: I0122 15:24:27.585480 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:27 crc kubenswrapper[4825]: I0122 15:24:27.585521 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:27 crc kubenswrapper[4825]: I0122 15:24:27.585530 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:28 crc kubenswrapper[4825]: I0122 15:24:28.464519 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:55:57.426445875 +0000 UTC Jan 22 15:24:28 crc kubenswrapper[4825]: I0122 15:24:28.547428 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 15:24:28 crc kubenswrapper[4825]: I0122 15:24:28.547643 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:28 crc kubenswrapper[4825]: I0122 15:24:28.548938 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:28 crc kubenswrapper[4825]: I0122 15:24:28.549008 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:28 crc kubenswrapper[4825]: I0122 
15:24:28.549024 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:29 crc kubenswrapper[4825]: I0122 15:24:29.460647 4825 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 15:24:29 crc kubenswrapper[4825]: I0122 15:24:29.465369 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 19:26:02.919330462 +0000 UTC Jan 22 15:24:29 crc kubenswrapper[4825]: I0122 15:24:29.478711 4825 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 22 15:24:30 crc kubenswrapper[4825]: I0122 15:24:30.466279 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 00:38:38.534785774 +0000 UTC Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.004468 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.004741 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.006340 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.006397 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.006421 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.011483 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:24:31 crc kubenswrapper[4825]: E0122 15:24:31.033786 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.036624 4825 trace.go:236] Trace[324910498]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 15:24:17.538) (total time: 13497ms): Jan 22 15:24:31 crc kubenswrapper[4825]: Trace[324910498]: ---"Objects listed" error: 13497ms (15:24:31.036) Jan 22 15:24:31 crc kubenswrapper[4825]: Trace[324910498]: [13.497865232s] [13.497865232s] END Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.036657 4825 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.037089 4825 trace.go:236] Trace[597616776]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 15:24:17.828) (total time: 13208ms): Jan 22 15:24:31 crc kubenswrapper[4825]: Trace[597616776]: ---"Objects listed" error: 13208ms (15:24:31.036) Jan 22 15:24:31 crc kubenswrapper[4825]: Trace[597616776]: [13.208168108s] [13.208168108s] END Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.037149 4825 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.038110 4825 trace.go:236] Trace[1089491279]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 15:24:17.408) (total time: 13629ms): Jan 22 15:24:31 crc kubenswrapper[4825]: Trace[1089491279]: ---"Objects listed" error: 13629ms (15:24:31.037) Jan 22 15:24:31 crc kubenswrapper[4825]: Trace[1089491279]: [13.62989948s] [13.62989948s] END Jan 22 15:24:31 crc 
kubenswrapper[4825]: I0122 15:24:31.038142 4825 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.038114 4825 trace.go:236] Trace[1002240047]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 15:24:16.720) (total time: 14317ms): Jan 22 15:24:31 crc kubenswrapper[4825]: Trace[1002240047]: ---"Objects listed" error: 14317ms (15:24:31.037) Jan 22 15:24:31 crc kubenswrapper[4825]: Trace[1002240047]: [14.317288578s] [14.317288578s] END Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.038177 4825 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.039180 4825 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 22 15:24:31 crc kubenswrapper[4825]: E0122 15:24:31.040753 4825 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.243422 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.246622 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.456996 4825 apiserver.go:52] "Watching apiserver" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.460007 4825 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.460229 4825 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.460571 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.460607 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.460700 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.460735 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.460764 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 15:24:31 crc kubenswrapper[4825]: E0122 15:24:31.460804 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.460867 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:31 crc kubenswrapper[4825]: E0122 15:24:31.460935 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:24:31 crc kubenswrapper[4825]: E0122 15:24:31.461031 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.463228 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.463480 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.463616 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.463753 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.464514 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.464562 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.464894 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.465163 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.465876 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.466349 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline 
is 2026-01-03 23:36:50.822491281 +0000 UTC Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.490245 4825 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55764->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.490274 4825 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55778->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.490690 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55764->192.168.126.11:17697: read: connection reset by peer" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.490773 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55778->192.168.126.11:17697: read: connection reset by peer" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.491080 4825 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" 
start-of-body= Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.491125 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.505332 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.514140 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.523369 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.533617 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.542303 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.555480 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.556143 4825 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.566787 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.582533 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.590694 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.596504 4825 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.596614 4825 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 22 15:24:31 crc kubenswrapper[4825]: E0122 15:24:31.604377 4825 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.606240 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641245 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641298 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641331 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641368 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641402 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641436 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641468 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641499 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641530 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641563 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641605 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641636 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641665 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641685 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641694 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641718 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641724 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641759 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641778 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641828 4825 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641844 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641862 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641867 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641877 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641894 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641909 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641890 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642028 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.641924 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642115 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642140 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642137 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642210 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642227 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642245 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642262 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642276 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642293 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642310 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642337 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642361 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642385 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642405 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642428 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642448 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642469 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642489 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642537 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642554 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642568 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642582 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642598 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642612 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 15:24:31 crc 
kubenswrapper[4825]: I0122 15:24:31.642626 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642641 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642665 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642687 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642706 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642722 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642738 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642754 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642769 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642785 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642821 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642837 4825 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642890 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642909 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642925 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642940 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642955 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642973 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643012 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643056 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643076 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643092 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:24:31 
crc kubenswrapper[4825]: I0122 15:24:31.643107 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643121 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643138 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643153 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643178 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643206 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643230 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643259 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643281 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643304 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643326 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " 
Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643346 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643366 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643386 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643409 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643431 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643453 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") 
pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643474 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643494 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643521 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643543 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643572 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643595 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643618 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643641 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643664 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643688 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643713 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643736 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643761 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643783 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643805 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643826 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643847 4825 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643867 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643888 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643907 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643928 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643948 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643968 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644006 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644027 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644050 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644075 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644097 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644126 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644150 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644174 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644195 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644216 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 15:24:31 crc 
kubenswrapper[4825]: I0122 15:24:31.644238 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644262 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644289 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644311 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644334 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644357 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644378 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644399 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644423 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644445 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644467 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 22 15:24:31 crc 
kubenswrapper[4825]: I0122 15:24:31.644491 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644513 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644540 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644564 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644587 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644608 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644633 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644655 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644677 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644708 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644725 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 15:24:31 crc kubenswrapper[4825]: 
I0122 15:24:31.644745 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644767 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644789 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644817 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644838 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644858 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644879 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644906 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645024 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645051 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645072 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645095 4825 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645121 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645143 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645167 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645191 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645213 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645237 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645261 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645291 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645317 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645341 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645366 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645392 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645416 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645442 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645467 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645496 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 
22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645518 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645542 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645561 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645580 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645607 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645632 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645741 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645777 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645802 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645824 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645860 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645885 
4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645910 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645936 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645958 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.646047 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.646075 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.646101 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.646127 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.646150 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.646174 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.646198 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.646495 4825 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.646548 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.646576 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.646601 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647318 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647393 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647419 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647444 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647471 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647492 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647510 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647532 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647551 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647572 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647591 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 15:24:31 crc 
kubenswrapper[4825]: I0122 15:24:31.647611 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647630 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647652 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647672 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647726 4825 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647740 4825 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.650750 4825 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.650775 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.650794 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.650806 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.650816 4825 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.651474 4825 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642181 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642342 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642475 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642755 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.642830 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643214 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643398 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643402 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643626 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643656 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643734 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.643813 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644091 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.655932 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644096 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.655960 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644198 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644323 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644349 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644462 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644461 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644585 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.644888 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645110 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645222 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645277 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645323 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645464 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645730 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.645975 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.646053 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.646131 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.646973 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647147 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647158 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647234 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647298 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.656445 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.656495 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: E0122 15:24:31.656767 4825 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.656767 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647312 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647609 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647625 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647562 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647645 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647681 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647702 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647827 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.647964 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.648115 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.648137 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.648326 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.648474 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.648472 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.648689 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.648797 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.648840 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.649105 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.649125 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.649138 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.649384 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.649399 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.649521 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.649599 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.649601 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.649634 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.649657 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.649694 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.649800 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.649842 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.649836 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.650215 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.650262 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.650251 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.650274 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.650367 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.650681 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.650730 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.650768 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.650853 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.651057 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.651104 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.651210 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.651262 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.651261 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.657317 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.651493 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.651517 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.657336 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.651722 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.651810 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.651884 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.651914 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.651947 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.652113 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.652177 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: E0122 15:24:31.652256 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:24:32.15223524 +0000 UTC m=+18.913762240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.652303 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.652482 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.652750 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.652777 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.652829 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.653155 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.653199 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.653235 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.653443 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.653874 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.653910 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.653911 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.654300 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: E0122 15:24:31.654713 4825 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.655087 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.655362 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.655391 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.656103 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.657485 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.657763 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.657530 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.657950 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.657976 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.659011 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.658872 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.659525 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.659926 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.660704 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.661514 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.662344 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.663065 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.663145 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.663193 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.663416 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.663472 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.663594 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.663860 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.663958 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.664025 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.664832 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.664922 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.665222 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.665503 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.665520 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.665794 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.666093 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.666130 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: E0122 15:24:31.666188 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:32.166163284 +0000 UTC m=+18.927690204 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.666505 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.666723 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.666787 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.666845 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.666886 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.666929 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.667256 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.667329 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.667474 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.667598 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.667668 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.668079 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.668286 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.668517 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.668554 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.668576 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.668581 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.668638 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.668697 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.668670 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.668793 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: E0122 15:24:31.668878 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:32.168851658 +0000 UTC m=+18.930378668 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.670854 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.671770 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.671961 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.672270 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: E0122 15:24:31.672557 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 15:24:31 crc kubenswrapper[4825]: E0122 15:24:31.672583 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 15:24:31 crc kubenswrapper[4825]: E0122 15:24:31.672601 4825 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:31 crc kubenswrapper[4825]: E0122 15:24:31.672743 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:32.172717269 +0000 UTC m=+18.934244189 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.673303 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.673330 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.673344 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.673560 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.674022 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.679545 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: E0122 15:24:31.682336 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 15:24:31 crc kubenswrapper[4825]: E0122 15:24:31.682367 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 15:24:31 crc kubenswrapper[4825]: E0122 15:24:31.682383 4825 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:31 crc kubenswrapper[4825]: E0122 15:24:31.682457 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:32.182433212 +0000 UTC m=+18.943960212 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.682801 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.682867 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.683085 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.683291 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.684196 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.684333 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.685516 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.685722 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.686866 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.687211 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.687236 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.687513 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.689505 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.691665 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.697361 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.698830 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.707178 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.710758 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.714296 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.716303 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.724670 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.744798 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752095 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752140 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752194 4825 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752206 4825 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752214 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752223 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752231 4825 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752239 4825 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752246 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752254 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752262 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752269 4825 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752277 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752285 4825 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752292 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752300 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752307 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752315 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752323 4825 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752331 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752339 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc 
kubenswrapper[4825]: I0122 15:24:31.752348 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752356 4825 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752363 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752371 4825 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752378 4825 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752386 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752394 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752401 4825 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752409 4825 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752416 4825 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752424 4825 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752431 4825 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752439 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752447 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752455 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752464 4825 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752473 4825 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752481 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752491 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752498 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752507 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752515 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752523 4825 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752530 4825 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752539 4825 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752546 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752553 4825 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752561 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752569 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on 
node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752577 4825 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752586 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752594 4825 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752601 4825 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752609 4825 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752618 4825 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752625 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 
15:24:31.752632 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752639 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752646 4825 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752655 4825 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752673 4825 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752680 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752688 4825 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752696 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752705 4825 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752713 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752721 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752729 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752736 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752744 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752751 4825 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752759 4825 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752766 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752774 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752782 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752790 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752797 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752805 4825 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752813 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752803 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752822 4825 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752901 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752919 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.752946 4825 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc 
kubenswrapper[4825]: I0122 15:24:31.752966 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753018 4825 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753036 4825 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753053 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753072 4825 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753089 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753106 4825 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753125 4825 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753144 4825 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753160 4825 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753179 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753195 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753211 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753227 4825 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753244 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node 
\"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753261 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753278 4825 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753294 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753311 4825 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753329 4825 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753348 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753368 4825 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 
15:24:31.753384 4825 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753401 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753417 4825 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753434 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753451 4825 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753468 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753484 4825 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753501 4825 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753517 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753534 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753551 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753567 4825 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753583 4825 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753600 4825 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753616 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753632 4825 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753649 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753665 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753681 4825 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753698 4825 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753715 4825 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753731 4825 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 
crc kubenswrapper[4825]: I0122 15:24:31.753747 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753764 4825 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753780 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753797 4825 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753813 4825 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753829 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753845 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753863 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753879 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753898 4825 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753915 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753931 4825 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753948 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.753965 4825 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754006 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") 
on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754023 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754040 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754057 4825 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754074 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754093 4825 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754112 4825 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754129 4825 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc 
kubenswrapper[4825]: I0122 15:24:31.754145 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754162 4825 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754178 4825 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754194 4825 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754210 4825 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754226 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754244 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754261 4825 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754278 4825 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754294 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754310 4825 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754326 4825 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754342 4825 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754408 4825 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754426 4825 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath 
\"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754444 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754462 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754493 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754511 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754528 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754545 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754562 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 
15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754583 4825 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754601 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754617 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754635 4825 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754653 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754668 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754686 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754702 4825 reconciler_common.go:293] 
"Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754718 4825 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754734 4825 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754752 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754769 4825 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754784 4825 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754801 4825 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754817 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754834 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754851 4825 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.754910 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.785767 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.797194 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 15:24:31 crc kubenswrapper[4825]: I0122 15:24:31.803351 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 15:24:31 crc kubenswrapper[4825]: W0122 15:24:31.815002 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-cf513dcebac7bb23ac598a07aa5281f736c07fc2b5ca088fb788d947f3947a5d WatchSource:0}: Error finding container cf513dcebac7bb23ac598a07aa5281f736c07fc2b5ca088fb788d947f3947a5d: Status 404 returned error can't find the container with id cf513dcebac7bb23ac598a07aa5281f736c07fc2b5ca088fb788d947f3947a5d Jan 22 15:24:31 crc kubenswrapper[4825]: W0122 15:24:31.815833 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-91dc3211fb7970b659028faf36854882f4a77720fe3bddf62b8bbd1379b68bea WatchSource:0}: Error finding container 91dc3211fb7970b659028faf36854882f4a77720fe3bddf62b8bbd1379b68bea: Status 404 returned error can't find the container with id 91dc3211fb7970b659028faf36854882f4a77720fe3bddf62b8bbd1379b68bea Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.158608 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:24:32 crc kubenswrapper[4825]: E0122 15:24:32.158812 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:24:33.158778867 +0000 UTC m=+19.920305777 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.260415 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.260557 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.260654 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.260692 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:32 crc kubenswrapper[4825]: E0122 15:24:32.260925 4825 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 15:24:32 crc kubenswrapper[4825]: E0122 15:24:32.261032 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 15:24:32 crc kubenswrapper[4825]: E0122 15:24:32.261064 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 15:24:32 crc kubenswrapper[4825]: E0122 15:24:32.261082 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:33.261046077 +0000 UTC m=+20.022573027 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 15:24:32 crc kubenswrapper[4825]: E0122 15:24:32.261119 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 15:24:32 crc kubenswrapper[4825]: E0122 15:24:32.261126 4825 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 15:24:32 crc kubenswrapper[4825]: E0122 15:24:32.261240 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:33.261211462 +0000 UTC m=+20.022738392 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 15:24:32 crc kubenswrapper[4825]: E0122 15:24:32.261143 4825 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:32 crc kubenswrapper[4825]: E0122 15:24:32.261361 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:33.261331246 +0000 UTC m=+20.022858196 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:32 crc kubenswrapper[4825]: E0122 15:24:32.261151 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 15:24:32 crc kubenswrapper[4825]: E0122 15:24:32.261440 4825 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:32 crc kubenswrapper[4825]: E0122 15:24:32.261505 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:33.261491831 +0000 UTC m=+20.023018841 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.466929 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:20:23.461966685 +0000 UTC Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.599344 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.601308 4825 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006" exitCode=255 Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.601384 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006"} Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.602035 4825 scope.go:117] "RemoveContainer" containerID="4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.603965 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"91dc3211fb7970b659028faf36854882f4a77720fe3bddf62b8bbd1379b68bea"} Jan 22 15:24:32 crc 
kubenswrapper[4825]: I0122 15:24:32.607155 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948"} Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.607210 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af"} Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.607229 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"cf513dcebac7bb23ac598a07aa5281f736c07fc2b5ca088fb788d947f3947a5d"} Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.610166 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d"} Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.610218 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e1b4d633ceb9762ff33f06f5c6a9764d5e4b9ce657040f91faf76fd4c0645fde"} Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.621482 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:32Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.639528 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:32Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.659054 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:32Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.674879 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:32Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.686586 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:32Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.701074 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:32Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.717471 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:32Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.734155 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:32Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.746945 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:32Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.762003 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:32Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.775388 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:32Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.785340 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:32Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.794696 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:32Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.803786 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:32Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.816773 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:32Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:32 crc kubenswrapper[4825]: I0122 15:24:32.827438 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:32Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.168139 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:24:33 crc kubenswrapper[4825]: E0122 15:24:33.168304 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:24:35.168284159 +0000 UTC m=+21.929811069 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.269577 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.269629 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.269654 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.269679 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:33 crc kubenswrapper[4825]: E0122 15:24:33.270035 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 15:24:33 crc kubenswrapper[4825]: E0122 15:24:33.270051 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 15:24:33 crc kubenswrapper[4825]: E0122 15:24:33.270063 4825 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:33 crc kubenswrapper[4825]: E0122 15:24:33.270117 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:35.270098145 +0000 UTC m=+22.031625045 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:33 crc kubenswrapper[4825]: E0122 15:24:33.270458 4825 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 15:24:33 crc kubenswrapper[4825]: E0122 15:24:33.270499 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:35.270489047 +0000 UTC m=+22.032015957 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 15:24:33 crc kubenswrapper[4825]: E0122 15:24:33.270558 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 15:24:33 crc kubenswrapper[4825]: E0122 15:24:33.270572 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 15:24:33 crc kubenswrapper[4825]: E0122 15:24:33.270582 4825 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:33 crc kubenswrapper[4825]: E0122 15:24:33.270620 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:35.27059952 +0000 UTC m=+22.032126430 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:33 crc kubenswrapper[4825]: E0122 15:24:33.270655 4825 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 15:24:33 crc kubenswrapper[4825]: E0122 15:24:33.270680 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:35.270672313 +0000 UTC m=+22.032199223 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.468087 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 03:35:06.384019664 +0000 UTC Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.516392 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:33 crc kubenswrapper[4825]: E0122 15:24:33.516517 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.516641 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:33 crc kubenswrapper[4825]: E0122 15:24:33.516752 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.516903 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:33 crc kubenswrapper[4825]: E0122 15:24:33.517049 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.521435 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.522484 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.524862 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.526308 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.528500 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.529888 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.530447 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.531439 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.533841 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.535363 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.537314 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.538159 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.539154 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.539903 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.540651 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.541323 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.542046 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.542743 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.543242 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" 
path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.544021 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.544850 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.545447 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.545803 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.546164 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.546736 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.547576 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.548757 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.549501 4825 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.550322 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.550809 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.551436 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.551863 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.552902 4825 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.553202 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.555173 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 22 15:24:33 
crc kubenswrapper[4825]: I0122 15:24:33.555967 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.556534 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.559044 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.560325 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.561021 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.561536 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.562355 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.563191 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.564508 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.565260 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.566464 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.571238 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.571863 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.573457 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.574282 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.575629 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.576205 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.577053 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.580140 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.581712 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.583626 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.585816 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.594126 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.613396 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.614508 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01"} Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.615107 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.621685 4825 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.634416 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.646886 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.656843 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.666021 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.676036 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.685644 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.696695 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.707645 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.721370 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.732742 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:33 crc kubenswrapper[4825]: I0122 15:24:33.744214 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.241601 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.243407 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.243474 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.243493 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.243573 4825 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.250019 4825 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.250254 4825 kubelet_node_status.go:79] 
"Successfully registered node" node="crc" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.251437 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.251475 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.251484 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.251499 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.251508 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:34Z","lastTransitionTime":"2026-01-22T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:34 crc kubenswrapper[4825]: E0122 15:24:34.275318 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:34Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.278885 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.278919 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.278929 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.278943 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.278952 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:34Z","lastTransitionTime":"2026-01-22T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:34 crc kubenswrapper[4825]: E0122 15:24:34.296862 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:34Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.300320 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.300343 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.300352 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.300364 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.300375 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:34Z","lastTransitionTime":"2026-01-22T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:34 crc kubenswrapper[4825]: E0122 15:24:34.315386 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:34Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.318426 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.318453 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.318462 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.318477 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.318486 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:34Z","lastTransitionTime":"2026-01-22T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:34 crc kubenswrapper[4825]: E0122 15:24:34.329678 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:34Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.332957 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.333001 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.333009 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.333025 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.333036 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:34Z","lastTransitionTime":"2026-01-22T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:34 crc kubenswrapper[4825]: E0122 15:24:34.346506 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:34Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:34 crc kubenswrapper[4825]: E0122 15:24:34.346681 4825 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.348059 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.348099 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.348109 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.348133 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.348142 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:34Z","lastTransitionTime":"2026-01-22T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.451151 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.451195 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.451205 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.451222 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.451232 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:34Z","lastTransitionTime":"2026-01-22T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.468597 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 09:00:48.752727137 +0000 UTC Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.554178 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.554216 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.554227 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.554241 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.554250 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:34Z","lastTransitionTime":"2026-01-22T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.656394 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.656435 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.656446 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.656461 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.656472 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:34Z","lastTransitionTime":"2026-01-22T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.758497 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.758555 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.758572 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.758596 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.758612 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:34Z","lastTransitionTime":"2026-01-22T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.861028 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.861069 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.861080 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.861095 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.861106 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:34Z","lastTransitionTime":"2026-01-22T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.962870 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.962930 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.962957 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.962971 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:34 crc kubenswrapper[4825]: I0122 15:24:34.962997 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:34Z","lastTransitionTime":"2026-01-22T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.065514 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.065564 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.065576 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.065592 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.065606 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:35Z","lastTransitionTime":"2026-01-22T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.168656 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.168748 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.168807 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.168836 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.168855 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:35Z","lastTransitionTime":"2026-01-22T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.187449 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:24:35 crc kubenswrapper[4825]: E0122 15:24:35.187625 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-22 15:24:39.187601416 +0000 UTC m=+25.949128336 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.273773 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.273820 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.273833 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.273849 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.273860 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:35Z","lastTransitionTime":"2026-01-22T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:35 crc kubenswrapper[4825]: E0122 15:24:35.288388 4825 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.288413 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:35 crc kubenswrapper[4825]: E0122 15:24:35.288497 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:39.28843173 +0000 UTC m=+26.049958650 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.288542 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.288600 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.288661 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:35 crc kubenswrapper[4825]: E0122 15:24:35.288809 4825 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 15:24:35 crc kubenswrapper[4825]: E0122 15:24:35.288819 4825 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 15:24:35 crc kubenswrapper[4825]: E0122 15:24:35.288839 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 15:24:35 crc kubenswrapper[4825]: E0122 15:24:35.288851 4825 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:35 crc kubenswrapper[4825]: E0122 15:24:35.288881 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:39.288858183 +0000 UTC m=+26.050385133 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 15:24:35 crc kubenswrapper[4825]: E0122 15:24:35.288913 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:39.288896874 +0000 UTC m=+26.050423824 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:35 crc kubenswrapper[4825]: E0122 15:24:35.288923 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 15:24:35 crc kubenswrapper[4825]: E0122 15:24:35.288962 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 15:24:35 crc kubenswrapper[4825]: E0122 15:24:35.289023 4825 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:35 crc kubenswrapper[4825]: E0122 15:24:35.289095 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:39.28906897 +0000 UTC m=+26.050595920 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.376959 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.377016 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.377026 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.377041 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.377050 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:35Z","lastTransitionTime":"2026-01-22T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.468955 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 05:56:35.680249153 +0000 UTC Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.480071 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.480100 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.480108 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.480122 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.480131 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:35Z","lastTransitionTime":"2026-01-22T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.516971 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.517012 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:35 crc kubenswrapper[4825]: E0122 15:24:35.517166 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.517017 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:35 crc kubenswrapper[4825]: E0122 15:24:35.517257 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:24:35 crc kubenswrapper[4825]: E0122 15:24:35.517345 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.583214 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.583249 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.583260 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.583274 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.583285 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:35Z","lastTransitionTime":"2026-01-22T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.620866 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e"} Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.633357 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:35Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.646030 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:35Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.656786 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:35Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.677651 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:35Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.685964 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.686035 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.686047 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.686065 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.686077 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:35Z","lastTransitionTime":"2026-01-22T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.692512 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:35Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.707602 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:35Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.719467 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:35Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.735393 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:35Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.787894 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.787933 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.787944 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.787959 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.787970 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:35Z","lastTransitionTime":"2026-01-22T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.890721 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.890770 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.890781 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.890801 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.890814 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:35Z","lastTransitionTime":"2026-01-22T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.993872 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.993932 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.993943 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.993960 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:35 crc kubenswrapper[4825]: I0122 15:24:35.993972 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:35Z","lastTransitionTime":"2026-01-22T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.097035 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.097087 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.097098 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.097115 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.097127 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:36Z","lastTransitionTime":"2026-01-22T15:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.199436 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.199489 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.199504 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.199526 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.199541 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:36Z","lastTransitionTime":"2026-01-22T15:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.302198 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.302241 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.302254 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.302279 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.302292 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:36Z","lastTransitionTime":"2026-01-22T15:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.405710 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.405762 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.405770 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.405789 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.405800 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:36Z","lastTransitionTime":"2026-01-22T15:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.469210 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 10:34:14.040773651 +0000 UTC Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.508512 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.508584 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.508597 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.508616 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.508631 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:36Z","lastTransitionTime":"2026-01-22T15:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.611395 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.611470 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.611495 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.611527 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.611552 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:36Z","lastTransitionTime":"2026-01-22T15:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.714380 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.714443 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.714459 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.714484 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.714501 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:36Z","lastTransitionTime":"2026-01-22T15:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.816967 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.817018 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.817026 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.817039 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.817047 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:36Z","lastTransitionTime":"2026-01-22T15:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.919586 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.919856 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.919864 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.919877 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:36 crc kubenswrapper[4825]: I0122 15:24:36.919885 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:36Z","lastTransitionTime":"2026-01-22T15:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.022885 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.023063 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.023098 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.023127 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.023148 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:37Z","lastTransitionTime":"2026-01-22T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.126067 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.126129 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.126141 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.126157 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.126168 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:37Z","lastTransitionTime":"2026-01-22T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.228892 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.229300 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.229500 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.229701 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.230138 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:37Z","lastTransitionTime":"2026-01-22T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.333462 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.333501 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.333512 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.333531 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.333551 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:37Z","lastTransitionTime":"2026-01-22T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.435372 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.435622 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.435722 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.435828 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.435912 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:37Z","lastTransitionTime":"2026-01-22T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.469853 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:13:48.910735691 +0000 UTC Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.508113 4825 csr.go:261] certificate signing request csr-2dfwv is approved, waiting to be issued Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.516649 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.516676 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:37 crc kubenswrapper[4825]: E0122 15:24:37.516795 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.516660 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:37 crc kubenswrapper[4825]: E0122 15:24:37.516905 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:24:37 crc kubenswrapper[4825]: E0122 15:24:37.517162 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.522914 4825 csr.go:257] certificate signing request csr-2dfwv is issued Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.538626 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.538673 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.538685 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.538740 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.538767 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:37Z","lastTransitionTime":"2026-01-22T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.640371 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.640404 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.640412 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.640424 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.640433 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:37Z","lastTransitionTime":"2026-01-22T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.742714 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.742761 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.742772 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.742790 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.742803 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:37Z","lastTransitionTime":"2026-01-22T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.844457 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.844713 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.844774 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.844847 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.844920 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:37Z","lastTransitionTime":"2026-01-22T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.947090 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.947342 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.947425 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.947515 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:37 crc kubenswrapper[4825]: I0122 15:24:37.947588 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:37Z","lastTransitionTime":"2026-01-22T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.050031 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.050255 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.050320 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.050385 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.050446 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:38Z","lastTransitionTime":"2026-01-22T15:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.153282 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.153325 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.153333 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.153346 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.153356 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:38Z","lastTransitionTime":"2026-01-22T15:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.255473 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.255706 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.255802 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.255881 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.256168 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:38Z","lastTransitionTime":"2026-01-22T15:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.343499 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-8jk65"] Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.344018 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-5bzgc"] Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.344044 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-8jk65" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.344641 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-ljkjt"] Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.345165 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.345271 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.348134 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.348269 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.348712 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.349484 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.349601 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.349644 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.349807 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.349876 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.349998 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.350545 4825 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"multus-daemon-config" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.363746 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-k9wpt"] Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.364442 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.365587 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.365706 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.365809 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.365912 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.366044 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:38Z","lastTransitionTime":"2026-01-22T15:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.366345 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.366488 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.366771 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.368677 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.371782 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.379042 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.391492 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.400570 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.410252 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.420240 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.429531 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.440856 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.463855 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.468467 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.468517 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.468527 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.468540 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.468548 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:38Z","lastTransitionTime":"2026-01-22T15:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.470325 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 06:55:50.671347565 +0000 UTC Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.481832 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.498185 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.514165 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.517971 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-multus-socket-dir-parent\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518025 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-os-release\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518044 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cf928ed7-f98c-4ced-b3d7-cb4700d3a906-hosts-file\") pod \"node-resolver-8jk65\" (UID: \"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\") " pod="openshift-dns/node-resolver-8jk65" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518061 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-host-var-lib-cni-multus\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518076 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-host-var-lib-kubelet\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518089 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-multus-conf-dir\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518103 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" 
(UniqueName: \"kubernetes.io/host-path/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-cnibin\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518127 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518191 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-cnibin\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518234 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518254 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-os-release\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518272 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-v6zzf\" (UniqueName: \"kubernetes.io/projected/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-kube-api-access-v6zzf\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518287 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-host-run-netns\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518301 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/049abb37-810d-475f-b042-bceb43e81dd5-multus-daemon-config\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518333 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt2ds\" (UniqueName: \"kubernetes.io/projected/cf928ed7-f98c-4ced-b3d7-cb4700d3a906-kube-api-access-vt2ds\") pod \"node-resolver-8jk65\" (UID: \"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\") " pod="openshift-dns/node-resolver-8jk65" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518353 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d6015ae-d193-4854-9861-dc4384510fdb-mcd-auth-proxy-config\") pod \"machine-config-daemon-k9wpt\" (UID: \"1d6015ae-d193-4854-9861-dc4384510fdb\") " pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518367 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-host-var-lib-cni-bin\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518381 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-hostroot\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518394 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-system-cni-dir\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518411 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq8gg\" (UniqueName: \"kubernetes.io/projected/1d6015ae-d193-4854-9861-dc4384510fdb-kube-api-access-xq8gg\") pod \"machine-config-daemon-k9wpt\" (UID: \"1d6015ae-d193-4854-9861-dc4384510fdb\") " pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518440 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-etc-kubernetes\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: 
I0122 15:24:38.518466 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-multus-cni-dir\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518493 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-host-run-multus-certs\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518522 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-host-run-k8s-cni-cncf-io\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518541 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-system-cni-dir\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518562 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d6015ae-d193-4854-9861-dc4384510fdb-proxy-tls\") pod \"machine-config-daemon-k9wpt\" (UID: \"1d6015ae-d193-4854-9861-dc4384510fdb\") " pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 
15:24:38.518577 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1d6015ae-d193-4854-9861-dc4384510fdb-rootfs\") pod \"machine-config-daemon-k9wpt\" (UID: \"1d6015ae-d193-4854-9861-dc4384510fdb\") " pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518603 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-cni-binary-copy\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518637 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/049abb37-810d-475f-b042-bceb43e81dd5-cni-binary-copy\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.518665 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldrdz\" (UniqueName: \"kubernetes.io/projected/049abb37-810d-475f-b042-bceb43e81dd5-kube-api-access-ldrdz\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.524038 4825 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-22 15:19:37 +0000 UTC, rotation deadline is 2026-12-12 03:55:57.732188683 +0000 UTC Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.524274 4825 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 
7764h31m19.207920229s for next certificate rotation Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.530682 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816
bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.540585 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.551572 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.560407 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.570272 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.570300 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.570309 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.570324 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.570334 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:38Z","lastTransitionTime":"2026-01-22T15:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.571140 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.581490 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.593681 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controll
er\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.604583 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.616320 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619330 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-multus-socket-dir-parent\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619354 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-os-release\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " 
pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619371 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cf928ed7-f98c-4ced-b3d7-cb4700d3a906-hosts-file\") pod \"node-resolver-8jk65\" (UID: \"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\") " pod="openshift-dns/node-resolver-8jk65" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619387 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-host-var-lib-cni-multus\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619407 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-host-var-lib-kubelet\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619422 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-multus-conf-dir\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619437 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-cnibin\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 
15:24:38.619452 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-cnibin\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619445 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-multus-socket-dir-parent\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619524 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-os-release\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619559 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cf928ed7-f98c-4ced-b3d7-cb4700d3a906-hosts-file\") pod \"node-resolver-8jk65\" (UID: \"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\") " pod="openshift-dns/node-resolver-8jk65" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619579 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-host-var-lib-cni-multus\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619598 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-host-var-lib-kubelet\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619617 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-multus-conf-dir\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619647 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-cnibin\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619676 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-cnibin\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619465 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619713 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619729 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-os-release\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619743 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6zzf\" (UniqueName: \"kubernetes.io/projected/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-kube-api-access-v6zzf\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619764 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt2ds\" (UniqueName: \"kubernetes.io/projected/cf928ed7-f98c-4ced-b3d7-cb4700d3a906-kube-api-access-vt2ds\") pod \"node-resolver-8jk65\" (UID: \"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\") " pod="openshift-dns/node-resolver-8jk65" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619779 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-host-run-netns\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619794 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/049abb37-810d-475f-b042-bceb43e81dd5-multus-daemon-config\") pod \"multus-ljkjt\" (UID: 
\"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619809 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d6015ae-d193-4854-9861-dc4384510fdb-mcd-auth-proxy-config\") pod \"machine-config-daemon-k9wpt\" (UID: \"1d6015ae-d193-4854-9861-dc4384510fdb\") " pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619814 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619825 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-host-var-lib-cni-bin\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619829 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-os-release\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619843 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-hostroot\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 
15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619858 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-system-cni-dir\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619886 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-etc-kubernetes\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619900 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq8gg\" (UniqueName: \"kubernetes.io/projected/1d6015ae-d193-4854-9861-dc4384510fdb-kube-api-access-xq8gg\") pod \"machine-config-daemon-k9wpt\" (UID: \"1d6015ae-d193-4854-9861-dc4384510fdb\") " pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619917 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-multus-cni-dir\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619932 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-host-run-multus-certs\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619946 
4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-host-run-k8s-cni-cncf-io\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619960 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-system-cni-dir\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.619992 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d6015ae-d193-4854-9861-dc4384510fdb-proxy-tls\") pod \"machine-config-daemon-k9wpt\" (UID: \"1d6015ae-d193-4854-9861-dc4384510fdb\") " pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.620007 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/049abb37-810d-475f-b042-bceb43e81dd5-cni-binary-copy\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.620021 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldrdz\" (UniqueName: \"kubernetes.io/projected/049abb37-810d-475f-b042-bceb43e81dd5-kube-api-access-ldrdz\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.620037 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/1d6015ae-d193-4854-9861-dc4384510fdb-rootfs\") pod \"machine-config-daemon-k9wpt\" (UID: \"1d6015ae-d193-4854-9861-dc4384510fdb\") " pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.620053 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-cni-binary-copy\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.620548 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-cni-binary-copy\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.620578 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.620600 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/049abb37-810d-475f-b042-bceb43e81dd5-multus-daemon-config\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.620658 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-host-run-netns\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.620691 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-host-run-multus-certs\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.620721 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-host-var-lib-cni-bin\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.620750 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-hostroot\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.620778 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-system-cni-dir\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.620808 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-etc-kubernetes\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " 
pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.621020 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d6015ae-d193-4854-9861-dc4384510fdb-mcd-auth-proxy-config\") pod \"machine-config-daemon-k9wpt\" (UID: \"1d6015ae-d193-4854-9861-dc4384510fdb\") " pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.621157 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-multus-cni-dir\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.621267 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-host-run-k8s-cni-cncf-io\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.621296 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1d6015ae-d193-4854-9861-dc4384510fdb-rootfs\") pod \"machine-config-daemon-k9wpt\" (UID: \"1d6015ae-d193-4854-9861-dc4384510fdb\") " pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.621330 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/049abb37-810d-475f-b042-bceb43e81dd5-system-cni-dir\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.621395 4825 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/049abb37-810d-475f-b042-bceb43e81dd5-cni-binary-copy\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.625369 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d6015ae-d193-4854-9861-dc4384510fdb-proxy-tls\") pod \"machine-config-daemon-k9wpt\" (UID: \"1d6015ae-d193-4854-9861-dc4384510fdb\") " pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.633691 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.638550 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt2ds\" (UniqueName: \"kubernetes.io/projected/cf928ed7-f98c-4ced-b3d7-cb4700d3a906-kube-api-access-vt2ds\") pod \"node-resolver-8jk65\" (UID: \"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\") " pod="openshift-dns/node-resolver-8jk65" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.639807 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6zzf\" 
(UniqueName: \"kubernetes.io/projected/85f26f27-4ca0-42df-a11b-fa27e42eb3c7-kube-api-access-v6zzf\") pod \"multus-additional-cni-plugins-5bzgc\" (UID: \"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\") " pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.640451 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq8gg\" (UniqueName: \"kubernetes.io/projected/1d6015ae-d193-4854-9861-dc4384510fdb-kube-api-access-xq8gg\") pod \"machine-config-daemon-k9wpt\" (UID: \"1d6015ae-d193-4854-9861-dc4384510fdb\") " pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.642420 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldrdz\" (UniqueName: \"kubernetes.io/projected/049abb37-810d-475f-b042-bceb43e81dd5-kube-api-access-ldrdz\") pod \"multus-ljkjt\" (UID: \"049abb37-810d-475f-b042-bceb43e81dd5\") " pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.672417 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-8jk65" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.672702 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.672741 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.672752 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.672787 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.672799 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:38Z","lastTransitionTime":"2026-01-22T15:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.677725 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" Jan 22 15:24:38 crc kubenswrapper[4825]: W0122 15:24:38.683504 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf928ed7_f98c_4ced_b3d7_cb4700d3a906.slice/crio-fc7ecbd5a82b78d1e1a6c17509f3aa2d42e0393c2b02472d3f204ea7aaf0ff30 WatchSource:0}: Error finding container fc7ecbd5a82b78d1e1a6c17509f3aa2d42e0393c2b02472d3f204ea7aaf0ff30: Status 404 returned error can't find the container with id fc7ecbd5a82b78d1e1a6c17509f3aa2d42e0393c2b02472d3f204ea7aaf0ff30 Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.684759 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ljkjt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.688874 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:24:38 crc kubenswrapper[4825]: W0122 15:24:38.695755 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod049abb37_810d_475f_b042_bceb43e81dd5.slice/crio-d1088f8fa479a73480ac269f336c0a644cbc55ed51e827ff2d64cf6a075b9db0 WatchSource:0}: Error finding container d1088f8fa479a73480ac269f336c0a644cbc55ed51e827ff2d64cf6a075b9db0: Status 404 returned error can't find the container with id d1088f8fa479a73480ac269f336c0a644cbc55ed51e827ff2d64cf6a075b9db0 Jan 22 15:24:38 crc kubenswrapper[4825]: W0122 15:24:38.712196 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d6015ae_d193_4854_9861_dc4384510fdb.slice/crio-1bd22b5918f932c306fd69bad2fe64eafb64e955aeef1df5267f4a0294557952 WatchSource:0}: Error finding container 1bd22b5918f932c306fd69bad2fe64eafb64e955aeef1df5267f4a0294557952: Status 404 returned error can't find the 
container with id 1bd22b5918f932c306fd69bad2fe64eafb64e955aeef1df5267f4a0294557952 Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.716384 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c8f2b"] Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.717297 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.719364 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.719938 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.720205 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.720764 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.720854 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.720766 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.720960 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.735114 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.746971 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.759193 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.771679 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.775207 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.775244 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.775252 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:38 crc 
kubenswrapper[4825]: I0122 15:24:38.775268 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.775277 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:38Z","lastTransitionTime":"2026-01-22T15:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.791861 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.805558 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.814434 4825 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.821648 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-systemd-units\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.821688 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-run-netns\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.821708 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-run-openvswitch\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.821727 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-run-ovn\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.821745 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a2a796f1-0c22-4a59-a525-e426ecf221bc-ovnkube-config\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.821768 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a2a796f1-0c22-4a59-a525-e426ecf221bc-ovn-node-metrics-cert\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.821800 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-run-systemd\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.821820 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-slash\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.821871 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-kubelet\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.821893 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-var-lib-openvswitch\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.821918 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-node-log\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.821940 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-etc-openvswitch\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.821971 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a2a796f1-0c22-4a59-a525-e426ecf221bc-ovnkube-script-lib\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.822018 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-cni-netd\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.822038 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm2tb\" (UniqueName: \"kubernetes.io/projected/a2a796f1-0c22-4a59-a525-e426ecf221bc-kube-api-access-mm2tb\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.822057 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.822083 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a2a796f1-0c22-4a59-a525-e426ecf221bc-env-overrides\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.822128 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-run-ovn-kubernetes\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.822146 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-cni-bin\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.822165 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-log-socket\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.825571 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.842278 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.854228 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.865122 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.878614 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.878649 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.878660 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.878928 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.878944 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:38Z","lastTransitionTime":"2026-01-22T15:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.879179 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.889195 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:38Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.923725 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-etc-openvswitch\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.923810 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a2a796f1-0c22-4a59-a525-e426ecf221bc-ovnkube-script-lib\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.923874 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-etc-openvswitch\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.923952 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-cni-netd\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 
crc kubenswrapper[4825]: I0122 15:24:38.924050 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm2tb\" (UniqueName: \"kubernetes.io/projected/a2a796f1-0c22-4a59-a525-e426ecf221bc-kube-api-access-mm2tb\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924073 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924116 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a2a796f1-0c22-4a59-a525-e426ecf221bc-env-overrides\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924147 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-run-ovn-kubernetes\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924168 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-cni-bin\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 
crc kubenswrapper[4825]: I0122 15:24:38.924175 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924143 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-cni-netd\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924206 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-run-ovn-kubernetes\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924220 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-log-socket\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924187 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-log-socket\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924244 4825 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-cni-bin\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924300 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-systemd-units\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924324 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-run-netns\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924343 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-run-openvswitch\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924362 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-run-ovn\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924401 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/a2a796f1-0c22-4a59-a525-e426ecf221bc-ovnkube-config\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924420 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a2a796f1-0c22-4a59-a525-e426ecf221bc-ovn-node-metrics-cert\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924443 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-run-systemd\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924448 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-run-netns\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924466 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-slash\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924497 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-slash\") pod 
\"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924503 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-kubelet\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924530 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-var-lib-openvswitch\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924551 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-node-log\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924581 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-run-openvswitch\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.925215 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a2a796f1-0c22-4a59-a525-e426ecf221bc-env-overrides\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.924573 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-run-systemd\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.925445 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a2a796f1-0c22-4a59-a525-e426ecf221bc-ovnkube-config\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.927715 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-kubelet\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.927731 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-node-log\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.927760 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-run-ovn\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.927788 4825 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-var-lib-openvswitch\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.927764 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-systemd-units\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.928503 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a2a796f1-0c22-4a59-a525-e426ecf221bc-ovnkube-script-lib\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.930443 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a2a796f1-0c22-4a59-a525-e426ecf221bc-ovn-node-metrics-cert\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.943020 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm2tb\" (UniqueName: \"kubernetes.io/projected/a2a796f1-0c22-4a59-a525-e426ecf221bc-kube-api-access-mm2tb\") pod \"ovnkube-node-c8f2b\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.981641 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.981673 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.981682 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.981695 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:38 crc kubenswrapper[4825]: I0122 15:24:38.981704 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:38Z","lastTransitionTime":"2026-01-22T15:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.038401 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:39 crc kubenswrapper[4825]: W0122 15:24:39.061630 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2a796f1_0c22_4a59_a525_e426ecf221bc.slice/crio-4ff920ac3bcec2a0f6c60e684001728a74092e9ad118eebab33e48b7caafb953 WatchSource:0}: Error finding container 4ff920ac3bcec2a0f6c60e684001728a74092e9ad118eebab33e48b7caafb953: Status 404 returned error can't find the container with id 4ff920ac3bcec2a0f6c60e684001728a74092e9ad118eebab33e48b7caafb953 Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.083943 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.084008 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.084020 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.084039 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.084051 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:39Z","lastTransitionTime":"2026-01-22T15:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.186649 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.186698 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.186714 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.186745 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.186755 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:39Z","lastTransitionTime":"2026-01-22T15:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.229415 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:24:39 crc kubenswrapper[4825]: E0122 15:24:39.229548 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-22 15:24:47.229515739 +0000 UTC m=+33.991042669 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.288798 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.288826 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.288834 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.288846 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.288857 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:39Z","lastTransitionTime":"2026-01-22T15:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.330127 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.330185 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.330210 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.330229 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:39 crc kubenswrapper[4825]: E0122 15:24:39.330312 4825 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 22 15:24:39 crc kubenswrapper[4825]: E0122 15:24:39.330367 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:47.330349283 +0000 UTC m=+34.091876193 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 15:24:39 crc kubenswrapper[4825]: E0122 15:24:39.330383 4825 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 15:24:39 crc kubenswrapper[4825]: E0122 15:24:39.330454 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 15:24:39 crc kubenswrapper[4825]: E0122 15:24:39.330399 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 15:24:39 crc kubenswrapper[4825]: E0122 15:24:39.330494 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 15:24:39 crc kubenswrapper[4825]: E0122 15:24:39.330503 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 15:24:39 crc kubenswrapper[4825]: 
E0122 15:24:39.330507 4825 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:39 crc kubenswrapper[4825]: E0122 15:24:39.330517 4825 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:39 crc kubenswrapper[4825]: E0122 15:24:39.330476 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:47.330452817 +0000 UTC m=+34.091979727 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 15:24:39 crc kubenswrapper[4825]: E0122 15:24:39.330593 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:47.33057173 +0000 UTC m=+34.092098720 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:39 crc kubenswrapper[4825]: E0122 15:24:39.330607 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:47.330599711 +0000 UTC m=+34.092126741 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.390734 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.390781 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.390793 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.390820 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.390836 4825 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:39Z","lastTransitionTime":"2026-01-22T15:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.470610 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:19:34.055370801 +0000 UTC Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.493262 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.493300 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.493309 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.493326 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.493337 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:39Z","lastTransitionTime":"2026-01-22T15:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.516640 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:39 crc kubenswrapper[4825]: E0122 15:24:39.516739 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.516792 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:39 crc kubenswrapper[4825]: E0122 15:24:39.516831 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.516947 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:39 crc kubenswrapper[4825]: E0122 15:24:39.517021 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.595424 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.595458 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.595468 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.595483 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.595493 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:39Z","lastTransitionTime":"2026-01-22T15:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.667876 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerStarted","Data":"fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42"} Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.667929 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerStarted","Data":"1bd22b5918f932c306fd69bad2fe64eafb64e955aeef1df5267f4a0294557952"} Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.670022 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" event={"ID":"85f26f27-4ca0-42df-a11b-fa27e42eb3c7","Type":"ContainerStarted","Data":"263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470"} Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.670055 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" event={"ID":"85f26f27-4ca0-42df-a11b-fa27e42eb3c7","Type":"ContainerStarted","Data":"99bbdc9f04325da473b276731d40a90d4834a06a8f88484160894a6c81c4d2d3"} Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.672276 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8jk65" event={"ID":"cf928ed7-f98c-4ced-b3d7-cb4700d3a906","Type":"ContainerStarted","Data":"e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0"} Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.672318 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8jk65" event={"ID":"cf928ed7-f98c-4ced-b3d7-cb4700d3a906","Type":"ContainerStarted","Data":"fc7ecbd5a82b78d1e1a6c17509f3aa2d42e0393c2b02472d3f204ea7aaf0ff30"} Jan 22 
15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.674274 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerStarted","Data":"afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91"} Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.674305 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerStarted","Data":"4ff920ac3bcec2a0f6c60e684001728a74092e9ad118eebab33e48b7caafb953"} Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.683077 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ljkjt" event={"ID":"049abb37-810d-475f-b042-bceb43e81dd5","Type":"ContainerStarted","Data":"529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f"} Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.683430 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ljkjt" event={"ID":"049abb37-810d-475f-b042-bceb43e81dd5","Type":"ContainerStarted","Data":"d1088f8fa479a73480ac269f336c0a644cbc55ed51e827ff2d64cf6a075b9db0"} Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.683923 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.698885 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.698922 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.698930 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.699567 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.699601 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:39Z","lastTransitionTime":"2026-01-22T15:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.700775 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.713775 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.727711 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.740710 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.749882 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.766086 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.776535 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.790121 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.801918 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.802174 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.802254 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.802329 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.802402 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:39Z","lastTransitionTime":"2026-01-22T15:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.803347 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.824508 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.838087 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.853005 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ent
rypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.865378 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.884752 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.896161 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.905305 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.905574 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.905680 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.905798 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.906065 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:39Z","lastTransitionTime":"2026-01-22T15:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.908798 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.920483 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.929777 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.943225 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.952952 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.969151 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.982469 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:39 crc kubenswrapper[4825]: I0122 15:24:39.993854 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:39Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.009891 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.009929 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.009939 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.009954 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.009965 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:40Z","lastTransitionTime":"2026-01-22T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.010938 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.022548 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ent
rypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.112029 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.112295 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.112304 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:40 crc 
kubenswrapper[4825]: I0122 15:24:40.112319 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.112328 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:40Z","lastTransitionTime":"2026-01-22T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.215256 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.215298 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.215307 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.215322 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.215333 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:40Z","lastTransitionTime":"2026-01-22T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.317233 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.317288 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.317308 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.317331 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.317373 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:40Z","lastTransitionTime":"2026-01-22T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.419151 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.419199 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.419211 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.419231 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.419246 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:40Z","lastTransitionTime":"2026-01-22T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.471511 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 23:23:49.278674298 +0000 UTC Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.522067 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.522113 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.522142 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.522159 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.522170 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:40Z","lastTransitionTime":"2026-01-22T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.624241 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.624479 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.624490 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.624506 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.624516 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:40Z","lastTransitionTime":"2026-01-22T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.688577 4825 generic.go:334] "Generic (PLEG): container finished" podID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerID="afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91" exitCode=0 Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.688656 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerDied","Data":"afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91"} Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.688685 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerStarted","Data":"282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908"} Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.688697 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerStarted","Data":"eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6"} Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.690154 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerStarted","Data":"0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0"} Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.692046 4825 generic.go:334] "Generic (PLEG): container finished" podID="85f26f27-4ca0-42df-a11b-fa27e42eb3c7" containerID="263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470" exitCode=0 Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.692080 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-5bzgc" event={"ID":"85f26f27-4ca0-42df-a11b-fa27e42eb3c7","Type":"ContainerDied","Data":"263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470"} Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.710067 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\
\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.726610 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.726648 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.726658 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.726674 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.726687 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:40Z","lastTransitionTime":"2026-01-22T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.727269 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.744013 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ent
rypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.756953 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.773533 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.784864 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.798276 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.809112 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.818534 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.828764 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.829332 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.829354 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.829363 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.829378 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.829388 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:40Z","lastTransitionTime":"2026-01-22T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.837723 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.849365 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.859237 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.871656 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.881896 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.894039 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.904182 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.916676 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.937768 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.939795 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.939824 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.939832 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.939845 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.939852 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:40Z","lastTransitionTime":"2026-01-22T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.953704 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.967780 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.985038 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:40 crc kubenswrapper[4825]: I0122 15:24:40.997474 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:40Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.009421 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.020705 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.031753 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.043221 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.043264 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.043279 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.043298 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.043311 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:41Z","lastTransitionTime":"2026-01-22T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.145567 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.145692 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.145767 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.145837 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.145895 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:41Z","lastTransitionTime":"2026-01-22T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.249728 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.249775 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.249791 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.249818 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.249840 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:41Z","lastTransitionTime":"2026-01-22T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.258040 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-k59vq"] Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.258420 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-k59vq" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.260376 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.260632 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.260809 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.261025 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.273078 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.286269 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.303734 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.316402 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.340898 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.351625 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.351667 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.351676 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.351698 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.351711 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:41Z","lastTransitionTime":"2026-01-22T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.352602 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.367794 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.380735 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.390030 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.404175 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.415193 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.426412 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.435939 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.446151 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.446573 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c42dacaf-0842-4484-8d2d-4b36805194be-serviceca\") pod \"node-ca-k59vq\" (UID: \"c42dacaf-0842-4484-8d2d-4b36805194be\") " pod="openshift-image-registry/node-ca-k59vq" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.446616 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c42dacaf-0842-4484-8d2d-4b36805194be-host\") pod \"node-ca-k59vq\" (UID: \"c42dacaf-0842-4484-8d2d-4b36805194be\") " pod="openshift-image-registry/node-ca-k59vq" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.446659 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2dwx\" (UniqueName: \"kubernetes.io/projected/c42dacaf-0842-4484-8d2d-4b36805194be-kube-api-access-b2dwx\") pod \"node-ca-k59vq\" (UID: \"c42dacaf-0842-4484-8d2d-4b36805194be\") " pod="openshift-image-registry/node-ca-k59vq" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.454155 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.454190 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.454200 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.454213 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.454222 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:41Z","lastTransitionTime":"2026-01-22T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.472742 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:47:32.815732881 +0000 UTC Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.516321 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.516389 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:41 crc kubenswrapper[4825]: E0122 15:24:41.516457 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.516787 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:41 crc kubenswrapper[4825]: E0122 15:24:41.516855 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:24:41 crc kubenswrapper[4825]: E0122 15:24:41.516921 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.547767 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c42dacaf-0842-4484-8d2d-4b36805194be-serviceca\") pod \"node-ca-k59vq\" (UID: \"c42dacaf-0842-4484-8d2d-4b36805194be\") " pod="openshift-image-registry/node-ca-k59vq" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.548049 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c42dacaf-0842-4484-8d2d-4b36805194be-host\") pod \"node-ca-k59vq\" (UID: \"c42dacaf-0842-4484-8d2d-4b36805194be\") " pod="openshift-image-registry/node-ca-k59vq" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.548167 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2dwx\" (UniqueName: \"kubernetes.io/projected/c42dacaf-0842-4484-8d2d-4b36805194be-kube-api-access-b2dwx\") pod \"node-ca-k59vq\" (UID: \"c42dacaf-0842-4484-8d2d-4b36805194be\") " pod="openshift-image-registry/node-ca-k59vq" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.548116 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c42dacaf-0842-4484-8d2d-4b36805194be-host\") pod \"node-ca-k59vq\" (UID: 
\"c42dacaf-0842-4484-8d2d-4b36805194be\") " pod="openshift-image-registry/node-ca-k59vq" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.549364 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c42dacaf-0842-4484-8d2d-4b36805194be-serviceca\") pod \"node-ca-k59vq\" (UID: \"c42dacaf-0842-4484-8d2d-4b36805194be\") " pod="openshift-image-registry/node-ca-k59vq" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.556368 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.556404 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.556415 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.556433 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.556446 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:41Z","lastTransitionTime":"2026-01-22T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.567718 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2dwx\" (UniqueName: \"kubernetes.io/projected/c42dacaf-0842-4484-8d2d-4b36805194be-kube-api-access-b2dwx\") pod \"node-ca-k59vq\" (UID: \"c42dacaf-0842-4484-8d2d-4b36805194be\") " pod="openshift-image-registry/node-ca-k59vq" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.583398 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-k59vq" Jan 22 15:24:41 crc kubenswrapper[4825]: W0122 15:24:41.595081 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc42dacaf_0842_4484_8d2d_4b36805194be.slice/crio-e3636909cd861a543ca273f5b0af119cbb8739918b1eb7b1ed007a1caf4cf8b4 WatchSource:0}: Error finding container e3636909cd861a543ca273f5b0af119cbb8739918b1eb7b1ed007a1caf4cf8b4: Status 404 returned error can't find the container with id e3636909cd861a543ca273f5b0af119cbb8739918b1eb7b1ed007a1caf4cf8b4 Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.658374 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.658415 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.658425 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.658440 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.658450 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:41Z","lastTransitionTime":"2026-01-22T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.697901 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-k59vq" event={"ID":"c42dacaf-0842-4484-8d2d-4b36805194be","Type":"ContainerStarted","Data":"e3636909cd861a543ca273f5b0af119cbb8739918b1eb7b1ed007a1caf4cf8b4"} Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.700037 4825 generic.go:334] "Generic (PLEG): container finished" podID="85f26f27-4ca0-42df-a11b-fa27e42eb3c7" containerID="8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5" exitCode=0 Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.700202 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" event={"ID":"85f26f27-4ca0-42df-a11b-fa27e42eb3c7","Type":"ContainerDied","Data":"8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5"} Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.709420 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerStarted","Data":"255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275"} Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.709488 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerStarted","Data":"5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d"} Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.709504 4825 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerStarted","Data":"cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932"} Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.709518 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerStarted","Data":"21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca"} Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.718972 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.737073 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.749939 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.760855 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.765097 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.765169 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.765183 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 
15:24:41.765200 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.765214 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:41Z","lastTransitionTime":"2026-01-22T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.773937 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.787359 4825 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.803725 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.815516 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.834479 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.845411 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.854427 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.867853 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.869599 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.869628 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.869641 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.869659 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.869671 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:41Z","lastTransitionTime":"2026-01-22T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.884562 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.900930 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-22T15:24:41Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.972079 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.972125 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.972136 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.972154 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:41 crc kubenswrapper[4825]: I0122 15:24:41.972164 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:41Z","lastTransitionTime":"2026-01-22T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.073920 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.073971 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.074011 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.074027 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.074038 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:42Z","lastTransitionTime":"2026-01-22T15:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.177204 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.177250 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.177261 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.177278 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.177329 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:42Z","lastTransitionTime":"2026-01-22T15:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.279769 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.279819 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.279835 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.279855 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.279870 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:42Z","lastTransitionTime":"2026-01-22T15:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.382327 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.382394 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.382416 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.382447 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.382468 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:42Z","lastTransitionTime":"2026-01-22T15:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.473766 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:52:50.562009939 +0000 UTC Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.484630 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.484676 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.484690 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.484710 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.484726 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:42Z","lastTransitionTime":"2026-01-22T15:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.587144 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.587204 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.587216 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.587233 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.587244 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:42Z","lastTransitionTime":"2026-01-22T15:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.689730 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.689774 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.689781 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.689795 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.689804 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:42Z","lastTransitionTime":"2026-01-22T15:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.714200 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-k59vq" event={"ID":"c42dacaf-0842-4484-8d2d-4b36805194be","Type":"ContainerStarted","Data":"2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534"} Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.716662 4825 generic.go:334] "Generic (PLEG): container finished" podID="85f26f27-4ca0-42df-a11b-fa27e42eb3c7" containerID="fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd" exitCode=0 Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.716710 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" event={"ID":"85f26f27-4ca0-42df-a11b-fa27e42eb3c7","Type":"ContainerDied","Data":"fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd"} Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.741806 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.755875 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.771318 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.782154 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.791737 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.793437 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.793503 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.793521 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.793539 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.793575 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:42Z","lastTransitionTime":"2026-01-22T15:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.802766 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3a
d8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.812852 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.827302 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.838803 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.859122 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.875973 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.890574 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.895828 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.895871 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.895880 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 
15:24:42.895898 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.895910 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:42Z","lastTransitionTime":"2026-01-22T15:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.905859 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.915620 4825 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.933165 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.943028 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.958267 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.971585 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.983053 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.996164 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:42Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.998350 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.998584 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.998809 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.998953 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:42 crc kubenswrapper[4825]: I0122 15:24:42.999156 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:42Z","lastTransitionTime":"2026-01-22T15:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.007170 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.021304 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 
15:24:43.033853 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.052153 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.066003 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.083426 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.095697 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.101514 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.101617 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.101639 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.101677 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.101695 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:43Z","lastTransitionTime":"2026-01-22T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.105697 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.204391 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.204514 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.204550 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.204581 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.204603 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:43Z","lastTransitionTime":"2026-01-22T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.307109 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.307183 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.307241 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.307269 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.307286 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:43Z","lastTransitionTime":"2026-01-22T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.378576 4825 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.409965 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.410035 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.410046 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.410069 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.410081 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:43Z","lastTransitionTime":"2026-01-22T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.474228 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 00:40:01.100997929 +0000 UTC Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.512427 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.512460 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.512469 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.512484 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.512493 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:43Z","lastTransitionTime":"2026-01-22T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.516705 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.516806 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:43 crc kubenswrapper[4825]: E0122 15:24:43.516831 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.516857 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:43 crc kubenswrapper[4825]: E0122 15:24:43.517047 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:24:43 crc kubenswrapper[4825]: E0122 15:24:43.517141 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.533515 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.544146 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.557285 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.572430 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.593669 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.603078 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.614420 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.614453 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.614462 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.614478 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.614488 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:43Z","lastTransitionTime":"2026-01-22T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.616034 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z 
is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.628797 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.639331 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.652399 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.663917 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.676524 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 
15:24:43.687888 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.698565 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.717513 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 
15:24:43.717576 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.717593 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.717614 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.717632 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:43Z","lastTransitionTime":"2026-01-22T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.723673 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerStarted","Data":"a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20"} Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.726140 4825 generic.go:334] "Generic (PLEG): container finished" podID="85f26f27-4ca0-42df-a11b-fa27e42eb3c7" containerID="f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f" exitCode=0 Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.726205 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" event={"ID":"85f26f27-4ca0-42df-a11b-fa27e42eb3c7","Type":"ContainerDied","Data":"f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f"} Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.741047 4825 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.753124 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.769571 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.782415 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.794023 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.806605 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.820895 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.820925 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.820933 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.820947 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.820956 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:43Z","lastTransitionTime":"2026-01-22T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.824359 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.839107 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.851914 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.870569 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.883958 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 
15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.894772 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.906667 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.920626 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:43Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.935815 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.935883 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.935905 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 
15:24:43.935931 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:43 crc kubenswrapper[4825]: I0122 15:24:43.935948 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:43Z","lastTransitionTime":"2026-01-22T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.038049 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.038114 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.038129 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.038151 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.038162 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:44Z","lastTransitionTime":"2026-01-22T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.141431 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.141477 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.141488 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.141502 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.141511 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:44Z","lastTransitionTime":"2026-01-22T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.243725 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.243792 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.243814 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.243843 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.243863 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:44Z","lastTransitionTime":"2026-01-22T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.345865 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.345902 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.345911 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.345926 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.345936 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:44Z","lastTransitionTime":"2026-01-22T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.382088 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.382122 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.382131 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.382148 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.382157 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:44Z","lastTransitionTime":"2026-01-22T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:44 crc kubenswrapper[4825]: E0122 15:24:44.393627 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:44Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.397104 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.397175 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.397193 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.397216 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.397242 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:44Z","lastTransitionTime":"2026-01-22T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:44 crc kubenswrapper[4825]: E0122 15:24:44.410504 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:44Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.414112 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.414166 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.414183 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.414205 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.414222 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:44Z","lastTransitionTime":"2026-01-22T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:44 crc kubenswrapper[4825]: E0122 15:24:44.424932 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:44Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.427686 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.427715 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.427724 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.427738 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.427747 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:44Z","lastTransitionTime":"2026-01-22T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:44 crc kubenswrapper[4825]: E0122 15:24:44.439614 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:44Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.444058 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.444098 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.444107 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.444122 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.444132 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:44Z","lastTransitionTime":"2026-01-22T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:44 crc kubenswrapper[4825]: E0122 15:24:44.455886 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:44Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:44 crc kubenswrapper[4825]: E0122 15:24:44.456046 4825 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.457904 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.457943 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.457956 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.457991 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.458004 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:44Z","lastTransitionTime":"2026-01-22T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.474474 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:53:04.970550461 +0000 UTC Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.560261 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.560301 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.560310 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.560325 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.560339 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:44Z","lastTransitionTime":"2026-01-22T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.663054 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.663096 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.663124 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.663146 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.663161 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:44Z","lastTransitionTime":"2026-01-22T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.731758 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" event={"ID":"85f26f27-4ca0-42df-a11b-fa27e42eb3c7","Type":"ContainerStarted","Data":"740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19"} Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.746081 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:44Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.756892 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:44Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.765718 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.765765 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.765779 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.765828 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.765843 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:44Z","lastTransitionTime":"2026-01-22T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.768737 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-
binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:44Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.778964 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:44Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.789498 4825 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:44Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.804849 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:44Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.816967 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:44Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.833550 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:44Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.847901 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:44Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.866529 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:44Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.868182 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.868252 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.868265 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.868284 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.868296 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:44Z","lastTransitionTime":"2026-01-22T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.883931 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:44Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.895709 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:44Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.909184 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:44Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.921155 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:44Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.971282 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.971317 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.971325 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.971337 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:44 crc kubenswrapper[4825]: I0122 15:24:44.971346 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:44Z","lastTransitionTime":"2026-01-22T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.074775 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.074816 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.074825 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.074840 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.074851 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:45Z","lastTransitionTime":"2026-01-22T15:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.177731 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.177768 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.177782 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.177803 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.177817 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:45Z","lastTransitionTime":"2026-01-22T15:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.281236 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.281317 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.281335 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.281356 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.281371 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:45Z","lastTransitionTime":"2026-01-22T15:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.384201 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.384501 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.384597 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.384684 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.384770 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:45Z","lastTransitionTime":"2026-01-22T15:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.474907 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 10:14:50.903157796 +0000 UTC Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.487960 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.488054 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.488072 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.488096 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.488116 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:45Z","lastTransitionTime":"2026-01-22T15:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.516241 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:45 crc kubenswrapper[4825]: E0122 15:24:45.516414 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.516828 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:45 crc kubenswrapper[4825]: E0122 15:24:45.517150 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.517019 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:45 crc kubenswrapper[4825]: E0122 15:24:45.517403 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.591004 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.591419 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.591574 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.591713 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.591845 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:45Z","lastTransitionTime":"2026-01-22T15:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.695047 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.695234 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.695246 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.695262 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.695276 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:45Z","lastTransitionTime":"2026-01-22T15:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.740508 4825 generic.go:334] "Generic (PLEG): container finished" podID="85f26f27-4ca0-42df-a11b-fa27e42eb3c7" containerID="740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19" exitCode=0 Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.740585 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" event={"ID":"85f26f27-4ca0-42df-a11b-fa27e42eb3c7","Type":"ContainerDied","Data":"740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19"} Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.757140 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:45Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.775562 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:45Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.790225 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:45Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.797227 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.797274 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.797290 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.797311 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.797327 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:45Z","lastTransitionTime":"2026-01-22T15:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.803648 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:45Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.821123 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:45Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.838754 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:45Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.853556 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:45Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.864292 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:24:45Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.876648 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:24:45Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.885950 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:45Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.895020 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:45Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.898801 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.898832 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.898841 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.898858 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.898869 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:45Z","lastTransitionTime":"2026-01-22T15:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.909070 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3a
d8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:45Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.919797 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:45Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:45 crc kubenswrapper[4825]: I0122 15:24:45.935672 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:45Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.002701 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.002789 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.002806 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.002859 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.002876 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:46Z","lastTransitionTime":"2026-01-22T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.106598 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.106642 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.106651 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.106665 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.106675 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:46Z","lastTransitionTime":"2026-01-22T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.208889 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.208949 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.209009 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.209036 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.209055 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:46Z","lastTransitionTime":"2026-01-22T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.311008 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.311050 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.311062 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.311080 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.311091 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:46Z","lastTransitionTime":"2026-01-22T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.414652 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.414727 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.414748 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.414776 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.414797 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:46Z","lastTransitionTime":"2026-01-22T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.475866 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 15:50:52.467114394 +0000 UTC Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.517172 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.517216 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.517224 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.517236 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.517247 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:46Z","lastTransitionTime":"2026-01-22T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.619938 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.619992 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.620003 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.620021 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.620029 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:46Z","lastTransitionTime":"2026-01-22T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.722020 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.722054 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.722063 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.722079 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.722088 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:46Z","lastTransitionTime":"2026-01-22T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.746865 4825 generic.go:334] "Generic (PLEG): container finished" podID="85f26f27-4ca0-42df-a11b-fa27e42eb3c7" containerID="bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b" exitCode=0 Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.746916 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" event={"ID":"85f26f27-4ca0-42df-a11b-fa27e42eb3c7","Type":"ContainerDied","Data":"bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b"} Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.754350 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerStarted","Data":"07e063ccc1b5dca63090250db98b4104084f86a334ac4d848629a0159b9a8353"} Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.755001 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.755051 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.762916 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:46Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.779659 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:46Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.780250 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.782661 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.790918 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:46Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.804605 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:46Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.814839 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:46Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.822554 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:46Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.823890 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.823932 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.823945 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.823963 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.823991 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:46Z","lastTransitionTime":"2026-01-22T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.833055 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:46Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.842033 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:46Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.852755 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:46Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.863744 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:46Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.876405 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:46Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.890084 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:46Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.901606 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:46Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.927459 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.927486 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.927494 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.927506 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.927514 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:46Z","lastTransitionTime":"2026-01-22T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.927402 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:46Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.941623 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:46Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.953545 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201c
f5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:46Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.962441 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:46Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.976729 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:46Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:46 crc kubenswrapper[4825]: I0122 15:24:46.988784 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:24:46Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.005520 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.018426 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.029041 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.029080 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.029090 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.029106 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.029117 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:47Z","lastTransitionTime":"2026-01-22T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.032406 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.052691 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e063ccc1b5dca63090250db98b4104084f86a334ac4d848629a0159b9a8353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.066213 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.077951 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.092540 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.103848 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.112734 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.131821 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.131871 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.131883 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.131902 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.131915 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:47Z","lastTransitionTime":"2026-01-22T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.236055 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.236114 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.236129 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.236155 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.236167 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:47Z","lastTransitionTime":"2026-01-22T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.309247 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:24:47 crc kubenswrapper[4825]: E0122 15:24:47.309454 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-22 15:25:03.30942577 +0000 UTC m=+50.070952680 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.339620 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.339686 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.339745 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.339777 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.339800 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:47Z","lastTransitionTime":"2026-01-22T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.410554 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.410651 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.410721 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.410775 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:47 crc kubenswrapper[4825]: E0122 15:24:47.410881 4825 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 15:24:47 crc kubenswrapper[4825]: E0122 15:24:47.410887 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 15:24:47 crc kubenswrapper[4825]: E0122 15:24:47.411012 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 15:25:03.410959376 +0000 UTC m=+50.172486286 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 15:24:47 crc kubenswrapper[4825]: E0122 15:24:47.411025 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 15:24:47 crc kubenswrapper[4825]: E0122 15:24:47.411032 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 15:24:47 crc kubenswrapper[4825]: E0122 15:24:47.411054 4825 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:47 crc kubenswrapper[4825]: E0122 15:24:47.411076 4825 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 15:24:47 crc kubenswrapper[4825]: E0122 15:24:47.411104 4825 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:47 crc kubenswrapper[4825]: E0122 15:24:47.411150 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 15:25:03.411119151 +0000 UTC m=+50.172646111 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:47 crc kubenswrapper[4825]: E0122 15:24:47.411191 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 15:25:03.411173093 +0000 UTC m=+50.172700113 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:24:47 crc kubenswrapper[4825]: E0122 15:24:47.411223 4825 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 15:24:47 crc kubenswrapper[4825]: E0122 15:24:47.411295 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 15:25:03.411264126 +0000 UTC m=+50.172791076 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.442318 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.442384 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.442406 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.442426 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.442439 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:47Z","lastTransitionTime":"2026-01-22T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.476072 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:22:30.786362462 +0000 UTC Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.516128 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.516167 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.516152 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:47 crc kubenswrapper[4825]: E0122 15:24:47.516270 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:24:47 crc kubenswrapper[4825]: E0122 15:24:47.516350 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:24:47 crc kubenswrapper[4825]: E0122 15:24:47.516449 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.524208 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.537927 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.547538 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.547573 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.547584 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.547602 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.547614 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:47Z","lastTransitionTime":"2026-01-22T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.566149 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e063ccc1b5dca63090250db98b4104084f86a334ac4d848629a0159b9a8353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.584903 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T1
5:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.599312 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.613608 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.623048 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.644911 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.651101 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.651144 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.651152 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.651167 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.651176 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:47Z","lastTransitionTime":"2026-01-22T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.659438 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.672843 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.687516 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.733705 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.753342 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.753371 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.753379 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.753392 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.753400 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:47Z","lastTransitionTime":"2026-01-22T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.755389 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3a
d8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.757081 4825 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.767179 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.780890 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:47Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.856756 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.856833 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.856859 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.856890 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.856911 4825 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:47Z","lastTransitionTime":"2026-01-22T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.960136 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.960407 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.960492 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.960578 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:47 crc kubenswrapper[4825]: I0122 15:24:47.960666 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:47Z","lastTransitionTime":"2026-01-22T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.063030 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.063130 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.063151 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.063180 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.063200 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:48Z","lastTransitionTime":"2026-01-22T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.165330 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.165364 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.165372 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.165385 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.165394 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:48Z","lastTransitionTime":"2026-01-22T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.268098 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.268131 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.268140 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.268158 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.268167 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:48Z","lastTransitionTime":"2026-01-22T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.370698 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.370743 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.370754 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.370771 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.370782 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:48Z","lastTransitionTime":"2026-01-22T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.473791 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.473841 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.473858 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.473879 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.473898 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:48Z","lastTransitionTime":"2026-01-22T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.477026 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 20:46:26.428500891 +0000 UTC Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.582249 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.582328 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.582355 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.582385 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.582408 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:48Z","lastTransitionTime":"2026-01-22T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.684916 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.685177 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.685300 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.685398 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.685462 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:48Z","lastTransitionTime":"2026-01-22T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.764229 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" event={"ID":"85f26f27-4ca0-42df-a11b-fa27e42eb3c7","Type":"ContainerStarted","Data":"df00c8ba6b5cbf7b2512875703b6873a0d49edbd545551ac143bee72418494b1"} Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.764364 4825 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.777826 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:48Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.788049 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.788085 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 
15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.788093 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.788139 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.788152 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:48Z","lastTransitionTime":"2026-01-22T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.791037 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:48Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.811045 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df00c8ba6b5cbf7b2512875703b6873a0d49edbd545551ac143bee72418494b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557d
b28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:48Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.824200 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:48Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.842383 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e063ccc1b5dca63090250db98b4104084f86a334ac4d848629a0159b9a8353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:48Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.857996 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-s
yncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:48Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.874447 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:48Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.886893 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:48Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.890296 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.890338 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.890351 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.890369 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.890381 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:48Z","lastTransitionTime":"2026-01-22T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.895968 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:48Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.908211 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:48Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.917458 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:48Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.928619 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:48Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.942495 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-22T15:24:48Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.952364 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:48Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.992626 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.992668 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.992679 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.992697 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:48 crc kubenswrapper[4825]: I0122 15:24:48.992707 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:48Z","lastTransitionTime":"2026-01-22T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.095781 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.095822 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.095831 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.095848 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.095859 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:49Z","lastTransitionTime":"2026-01-22T15:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.198214 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.198290 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.198315 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.198347 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.198387 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:49Z","lastTransitionTime":"2026-01-22T15:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.301902 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.301957 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.301974 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.302025 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.302042 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:49Z","lastTransitionTime":"2026-01-22T15:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.404617 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.404964 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.405135 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.405240 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.405329 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:49Z","lastTransitionTime":"2026-01-22T15:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.477193 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 14:37:26.759006543 +0000 UTC Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.508068 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.508309 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.508397 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.508521 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.508606 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:49Z","lastTransitionTime":"2026-01-22T15:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.516655 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.516703 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:49 crc kubenswrapper[4825]: E0122 15:24:49.516765 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.516666 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:49 crc kubenswrapper[4825]: E0122 15:24:49.516935 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:24:49 crc kubenswrapper[4825]: E0122 15:24:49.517010 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.611241 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.611281 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.611292 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.611310 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.611322 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:49Z","lastTransitionTime":"2026-01-22T15:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.715803 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.716052 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.716113 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.716172 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.716230 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:49Z","lastTransitionTime":"2026-01-22T15:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.768640 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c8f2b_a2a796f1-0c22-4a59-a525-e426ecf221bc/ovnkube-controller/0.log" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.771506 4825 generic.go:334] "Generic (PLEG): container finished" podID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerID="07e063ccc1b5dca63090250db98b4104084f86a334ac4d848629a0159b9a8353" exitCode=1 Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.771546 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerDied","Data":"07e063ccc1b5dca63090250db98b4104084f86a334ac4d848629a0159b9a8353"} Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.772121 4825 scope.go:117] "RemoveContainer" containerID="07e063ccc1b5dca63090250db98b4104084f86a334ac4d848629a0159b9a8353" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.791152 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:49Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.808118 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201c
f5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:49Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.819052 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.819111 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.819129 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.819917 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.820036 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:49Z","lastTransitionTime":"2026-01-22T15:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.820099 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:49Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.836956 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:49Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.847043 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:49Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.860075 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df00c8ba6b5cbf7b2512875703b6873a0d49edbd545551ac143bee72418494b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:49Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.870615 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:49Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.880409 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:49Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.895913 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e063ccc1b5dca63090250db98b4104084f86a334ac4d848629a0159b9a8353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07e063ccc1b5dca63090250db98b4104084f86a334ac4d848629a0159b9a8353\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:24:49Z\\\",\\\"message\\\":\\\"mers/externalversions/factory.go:140\\\\nI0122 15:24:48.995764 6112 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.995941 6112 reflector.go:311] Stopping 
reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:48.996013 6112 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 15:24:48.996028 6112 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 15:24:48.996036 6112 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 15:24:48.996325 6112 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.996417 6112 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:48.996444 6112 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.996680 6112 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 15:24:48.996730 6112 factory.go:656] Stopping watch factory\\\\nI0122 15:24:48.996742 6112 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:49Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.907750 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:49Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.922059 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a7354
2ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:49Z is after 2025-08-24T17:21:41Z"
Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.922329 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.922348 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.922357 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.922372 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.922382 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:49Z","lastTransitionTime":"2026-01-22T15:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.932353 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:49Z is after 2025-08-24T17:21:41Z"
Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.943267 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:49Z is after 2025-08-24T17:21:41Z"
Jan 22 15:24:49 crc kubenswrapper[4825]: I0122 15:24:49.953801 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:49Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.024726 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.024753 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.024761 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.024774 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.024782 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:50Z","lastTransitionTime":"2026-01-22T15:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.127931 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.128043 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.128085 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.128124 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.128148 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:50Z","lastTransitionTime":"2026-01-22T15:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.231369 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.231652 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.231781 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.231870 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.231963 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:50Z","lastTransitionTime":"2026-01-22T15:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.334943 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.335004 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.335015 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.335030 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.335040 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:50Z","lastTransitionTime":"2026-01-22T15:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.443325 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.443355 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.443364 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.443380 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.443388 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:50Z","lastTransitionTime":"2026-01-22T15:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.477732 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 16:26:04.883376378 +0000 UTC
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.545603 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.545647 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.545657 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.545672 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.545683 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:50Z","lastTransitionTime":"2026-01-22T15:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.648637 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.648712 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.648735 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.648763 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.648787 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:50Z","lastTransitionTime":"2026-01-22T15:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.751574 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.751623 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.751634 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.751659 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.751672 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:50Z","lastTransitionTime":"2026-01-22T15:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.783046 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c8f2b_a2a796f1-0c22-4a59-a525-e426ecf221bc/ovnkube-controller/0.log"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.785445 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerStarted","Data":"6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce"}
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.785617 4825 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.803575 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:50Z is after 2025-08-24T17:21:41Z"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.817022 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:50Z is after 2025-08-24T17:21:41Z"
Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.833280 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:50Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.849290 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\
"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:50Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.853376 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.853427 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.853444 4825 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.853467 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.853484 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:50Z","lastTransitionTime":"2026-01-22T15:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.864849 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:50Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.915846 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:50Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.929398 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:50Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.944068 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:50Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.955275 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.955313 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.955323 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.955337 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.955346 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:50Z","lastTransitionTime":"2026-01-22T15:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.956383 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:50Z 
is after 2025-08-24T17:21:41Z" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.970591 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad83230
04f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:50Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:50 crc kubenswrapper[4825]: I0122 15:24:50.989498 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:50Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.004882 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df00c8ba6b5cbf7b2512875703b6873a0d49edbd545551ac143bee72418494b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557d
b28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:51Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.023321 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:51Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.041408 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07e063ccc1b5dca63090250db98b4104084f86a334ac4d848629a0159b9a8353\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:24:49Z\\\",\\\"message\\\":\\\"mers/externalversions/factory.go:140\\\\nI0122 15:24:48.995764 6112 reflector.go:311] 
Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.995941 6112 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:48.996013 6112 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 15:24:48.996028 6112 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 15:24:48.996036 6112 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 15:24:48.996325 6112 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.996417 6112 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:48.996444 6112 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.996680 6112 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 15:24:48.996730 6112 factory.go:656] Stopping watch factory\\\\nI0122 15:24:48.996742 6112 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:51Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.058336 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.058565 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.058713 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.058850 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.058971 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:51Z","lastTransitionTime":"2026-01-22T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.160451 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.160482 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.160491 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.160504 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.160512 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:51Z","lastTransitionTime":"2026-01-22T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.262046 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.262106 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.262117 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.262132 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.262142 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:51Z","lastTransitionTime":"2026-01-22T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.288653 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf"] Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.289071 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.292305 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.293744 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.310385 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:51Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.325620 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:24:51Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.335928 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:24:51Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.345505 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:51Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.353999 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:51Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.361171 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/17f70b06-0bde-412f-954f-fcfa00e88b6f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-m4zbf\" (UID: \"17f70b06-0bde-412f-954f-fcfa00e88b6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.361243 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/17f70b06-0bde-412f-954f-fcfa00e88b6f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-m4zbf\" (UID: \"17f70b06-0bde-412f-954f-fcfa00e88b6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.361283 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs8wn\" (UniqueName: \"kubernetes.io/projected/17f70b06-0bde-412f-954f-fcfa00e88b6f-kube-api-access-qs8wn\") pod \"ovnkube-control-plane-749d76644c-m4zbf\" (UID: \"17f70b06-0bde-412f-954f-fcfa00e88b6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.361329 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/17f70b06-0bde-412f-954f-fcfa00e88b6f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-m4zbf\" (UID: \"17f70b06-0bde-412f-954f-fcfa00e88b6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.364383 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.364426 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.364439 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.364458 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.364471 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:51Z","lastTransitionTime":"2026-01-22T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.365501 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3a
d8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:51Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.382526 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:51Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.403569 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df00c8ba6b5cbf7b2512875703b6873a0d49edbd545551ac143bee72418494b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557d
b28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:51Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.422385 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:51Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.452925 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07e063ccc1b5dca63090250db98b4104084f86a334ac4d848629a0159b9a8353\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:24:49Z\\\",\\\"message\\\":\\\"mers/externalversions/factory.go:140\\\\nI0122 15:24:48.995764 6112 reflector.go:311] 
Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.995941 6112 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:48.996013 6112 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 15:24:48.996028 6112 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 15:24:48.996036 6112 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 15:24:48.996325 6112 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.996417 6112 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:48.996444 6112 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.996680 6112 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 15:24:48.996730 6112 factory.go:656] Stopping watch factory\\\\nI0122 15:24:48.996742 6112 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:51Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.462798 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/17f70b06-0bde-412f-954f-fcfa00e88b6f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-m4zbf\" (UID: \"17f70b06-0bde-412f-954f-fcfa00e88b6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.462870 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs8wn\" (UniqueName: \"kubernetes.io/projected/17f70b06-0bde-412f-954f-fcfa00e88b6f-kube-api-access-qs8wn\") pod \"ovnkube-control-plane-749d76644c-m4zbf\" (UID: \"17f70b06-0bde-412f-954f-fcfa00e88b6f\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.462895 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/17f70b06-0bde-412f-954f-fcfa00e88b6f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-m4zbf\" (UID: \"17f70b06-0bde-412f-954f-fcfa00e88b6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.462925 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/17f70b06-0bde-412f-954f-fcfa00e88b6f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-m4zbf\" (UID: \"17f70b06-0bde-412f-954f-fcfa00e88b6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.463575 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/17f70b06-0bde-412f-954f-fcfa00e88b6f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-m4zbf\" (UID: \"17f70b06-0bde-412f-954f-fcfa00e88b6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.463673 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/17f70b06-0bde-412f-954f-fcfa00e88b6f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-m4zbf\" (UID: \"17f70b06-0bde-412f-954f-fcfa00e88b6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.465899 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17f70b06-0bde-412f-954f-fcfa00e88b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m4zbf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:51Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.466349 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.466416 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.466439 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.466467 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.466489 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:51Z","lastTransitionTime":"2026-01-22T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.470365 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/17f70b06-0bde-412f-954f-fcfa00e88b6f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-m4zbf\" (UID: \"17f70b06-0bde-412f-954f-fcfa00e88b6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.478100 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:24:20.343063868 +0000 UTC Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.479008 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs8wn\" (UniqueName: \"kubernetes.io/projected/17f70b06-0bde-412f-954f-fcfa00e88b6f-kube-api-access-qs8wn\") pod \"ovnkube-control-plane-749d76644c-m4zbf\" (UID: \"17f70b06-0bde-412f-954f-fcfa00e88b6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.486184 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a7354
2ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:51Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.505950 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:51Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.516511 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.516520 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.516821 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:51 crc kubenswrapper[4825]: E0122 15:24:51.517033 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:24:51 crc kubenswrapper[4825]: E0122 15:24:51.517355 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:24:51 crc kubenswrapper[4825]: E0122 15:24:51.517531 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.521536 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:51Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.537683 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:51Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.569151 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.569397 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.569485 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.569575 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.569689 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:51Z","lastTransitionTime":"2026-01-22T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.600579 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.672014 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.672256 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.672361 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.672444 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.672517 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:51Z","lastTransitionTime":"2026-01-22T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.774761 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.774826 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.774848 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.774881 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.774903 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:51Z","lastTransitionTime":"2026-01-22T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.877075 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.877143 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.877160 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.877183 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.877200 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:51Z","lastTransitionTime":"2026-01-22T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.979689 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.979729 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.979741 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.979760 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:51 crc kubenswrapper[4825]: I0122 15:24:51.979770 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:51Z","lastTransitionTime":"2026-01-22T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:52 crc kubenswrapper[4825]: W0122 15:24:52.002129 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17f70b06_0bde_412f_954f_fcfa00e88b6f.slice/crio-d651aa9b94cb58290e0817505b314e64c4eef13866322f5b1150edbcf32ce198 WatchSource:0}: Error finding container d651aa9b94cb58290e0817505b314e64c4eef13866322f5b1150edbcf32ce198: Status 404 returned error can't find the container with id d651aa9b94cb58290e0817505b314e64c4eef13866322f5b1150edbcf32ce198 Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.023923 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-hrdl8"] Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.024308 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:24:52 crc kubenswrapper[4825]: E0122 15:24:52.024359 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.040911 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 
15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.058834 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.069282 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r29dc\" (UniqueName: \"kubernetes.io/projected/538e3056-0e80-4b71-ada6-b7440b283761-kube-api-access-r29dc\") pod \"network-metrics-daemon-hrdl8\" (UID: 
\"538e3056-0e80-4b71-ada6-b7440b283761\") " pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.069475 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs\") pod \"network-metrics-daemon-hrdl8\" (UID: \"538e3056-0e80-4b71-ada6-b7440b283761\") " pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.079414 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.084204 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:52 crc 
kubenswrapper[4825]: I0122 15:24:52.084241 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.084253 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.084270 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.084282 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:52Z","lastTransitionTime":"2026-01-22T15:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.092701 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.104043 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"538e3056-0e80-4b71-ada6-b7440b283761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc 
kubenswrapper[4825]: I0122 15:24:52.119753 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.132683 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.145994 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b6
7b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.157846 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7934
26f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.168645 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.170170 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r29dc\" (UniqueName: \"kubernetes.io/projected/538e3056-0e80-4b71-ada6-b7440b283761-kube-api-access-r29dc\") pod \"network-metrics-daemon-hrdl8\" (UID: \"538e3056-0e80-4b71-ada6-b7440b283761\") " pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.170262 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs\") pod \"network-metrics-daemon-hrdl8\" (UID: \"538e3056-0e80-4b71-ada6-b7440b283761\") " pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:24:52 crc kubenswrapper[4825]: E0122 15:24:52.170430 4825 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Jan 22 15:24:52 crc kubenswrapper[4825]: E0122 15:24:52.170524 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs podName:538e3056-0e80-4b71-ada6-b7440b283761 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:52.670496892 +0000 UTC m=+39.432023802 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs") pod "network-metrics-daemon-hrdl8" (UID: "538e3056-0e80-4b71-ada6-b7440b283761") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.181945 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCo
unt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.186649 4825 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.186681 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.186690 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.186704 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.186716 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:52Z","lastTransitionTime":"2026-01-22T15:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.188008 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r29dc\" (UniqueName: \"kubernetes.io/projected/538e3056-0e80-4b71-ada6-b7440b283761-kube-api-access-r29dc\") pod \"network-metrics-daemon-hrdl8\" (UID: \"538e3056-0e80-4b71-ada6-b7440b283761\") " pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.197201 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.216397 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df00c8ba6b5cbf7b2512875703b6873a0d49edbd545551ac143bee72418494b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557d
b28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.229066 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.248738 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07e063ccc1b5dca63090250db98b4104084f86a334ac4d848629a0159b9a8353\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:24:49Z\\\",\\\"message\\\":\\\"mers/externalversions/factory.go:140\\\\nI0122 15:24:48.995764 6112 reflector.go:311] 
Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.995941 6112 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:48.996013 6112 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 15:24:48.996028 6112 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 15:24:48.996036 6112 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 15:24:48.996325 6112 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.996417 6112 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:48.996444 6112 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.996680 6112 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 15:24:48.996730 6112 factory.go:656] Stopping watch factory\\\\nI0122 15:24:48.996742 6112 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.260518 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17f70b06-0bde-412f-954f-fcfa00e88b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m4zbf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.289501 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.289525 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.289533 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.289547 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.289556 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:52Z","lastTransitionTime":"2026-01-22T15:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.391812 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.391887 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.391908 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.391938 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.391960 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:52Z","lastTransitionTime":"2026-01-22T15:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.479243 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 09:55:18.712659384 +0000 UTC Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.494235 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.494272 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.494281 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.494297 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.494306 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:52Z","lastTransitionTime":"2026-01-22T15:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.596936 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.597019 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.597038 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.597062 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.597078 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:52Z","lastTransitionTime":"2026-01-22T15:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.674902 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs\") pod \"network-metrics-daemon-hrdl8\" (UID: \"538e3056-0e80-4b71-ada6-b7440b283761\") " pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:24:52 crc kubenswrapper[4825]: E0122 15:24:52.675065 4825 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 15:24:52 crc kubenswrapper[4825]: E0122 15:24:52.675145 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs podName:538e3056-0e80-4b71-ada6-b7440b283761 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:53.675124531 +0000 UTC m=+40.436651451 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs") pod "network-metrics-daemon-hrdl8" (UID: "538e3056-0e80-4b71-ada6-b7440b283761") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.699250 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.699283 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.699291 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.699305 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.699313 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:52Z","lastTransitionTime":"2026-01-22T15:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.792714 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" event={"ID":"17f70b06-0bde-412f-954f-fcfa00e88b6f","Type":"ContainerStarted","Data":"766f5c3b2b5365400fac811b64160f64773868d3cf378e14ae72bd4f526a40b7"} Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.792756 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" event={"ID":"17f70b06-0bde-412f-954f-fcfa00e88b6f","Type":"ContainerStarted","Data":"d651aa9b94cb58290e0817505b314e64c4eef13866322f5b1150edbcf32ce198"} Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.794851 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c8f2b_a2a796f1-0c22-4a59-a525-e426ecf221bc/ovnkube-controller/1.log" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.795318 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c8f2b_a2a796f1-0c22-4a59-a525-e426ecf221bc/ovnkube-controller/0.log" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.798492 4825 generic.go:334] "Generic (PLEG): container finished" podID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerID="6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce" exitCode=1 Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.798544 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerDied","Data":"6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce"} Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.798596 4825 scope.go:117] "RemoveContainer" containerID="07e063ccc1b5dca63090250db98b4104084f86a334ac4d848629a0159b9a8353" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.799251 
4825 scope.go:117] "RemoveContainer" containerID="6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce" Jan 22 15:24:52 crc kubenswrapper[4825]: E0122 15:24:52.799411 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-c8f2b_openshift-ovn-kubernetes(a2a796f1-0c22-4a59-a525-e426ecf221bc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.800832 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.800851 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.800859 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.800870 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.800879 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:52Z","lastTransitionTime":"2026-01-22T15:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.825215 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.849309 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.864940 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.879644 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.892571 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.902800 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.902838 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.902847 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.902863 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.902872 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:52Z","lastTransitionTime":"2026-01-22T15:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.908197 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.920399 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.934736 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df00c8ba6b5cbf7b2512875703b6873a0d49edbd545551ac143bee72418494b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557d
b28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.950372 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.969281 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07e063ccc1b5dca63090250db98b4104084f86a334ac4d848629a0159b9a8353\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:24:49Z\\\",\\\"message\\\":\\\"mers/externalversions/factory.go:140\\\\nI0122 15:24:48.995764 6112 reflector.go:311] 
Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.995941 6112 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:48.996013 6112 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 15:24:48.996028 6112 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 15:24:48.996036 6112 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 15:24:48.996325 6112 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.996417 6112 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:48.996444 6112 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.996680 6112 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 15:24:48.996730 6112 factory.go:656] Stopping watch factory\\\\nI0122 15:24:48.996742 6112 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 15:24:51.240218 6281 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 
15:24:51.240535 6281 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:51.241209 6281 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 15:24:51.241235 6281 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 15:24:51.241255 6281 factory.go:656] Stopping watch factory\\\\nI0122 15:24:51.241275 6281 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 15:24:51.241284 6281 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 15:24:51.285035 6281 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 15:24:51.285068 6281 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 15:24:51.285121 6281 ovnkube.go:599] Stopped ovnkube\\\\nI0122 15:24:51.285158 6281 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 15:24:51.285236 6281 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.985022 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17f70b06-0bde-412f-954f-fcfa00e88b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m4zbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:52 crc kubenswrapper[4825]: I0122 15:24:52.997677 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 
15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:52Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.005116 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.005159 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:53 crc 
kubenswrapper[4825]: I0122 15:24:53.005167 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.005180 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.005191 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:53Z","lastTransitionTime":"2026-01-22T15:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.010065 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.025274 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.035468 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.045145 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"538e3056-0e80-4b71-ada6-b7440b283761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc 
kubenswrapper[4825]: I0122 15:24:53.107909 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.107946 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.107958 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.107994 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.108011 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:53Z","lastTransitionTime":"2026-01-22T15:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.210037 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.210085 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.210098 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.210114 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.210144 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:53Z","lastTransitionTime":"2026-01-22T15:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.312954 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.313062 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.313084 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.313111 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.313130 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:53Z","lastTransitionTime":"2026-01-22T15:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.415152 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.415190 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.415200 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.415216 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.415228 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:53Z","lastTransitionTime":"2026-01-22T15:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.480124 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 07:28:21.446300887 +0000 UTC Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.516230 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.516230 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.516351 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:53 crc kubenswrapper[4825]: E0122 15:24:53.516567 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.516631 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:53 crc kubenswrapper[4825]: E0122 15:24:53.516766 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:24:53 crc kubenswrapper[4825]: E0122 15:24:53.516899 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:24:53 crc kubenswrapper[4825]: E0122 15:24:53.517020 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.518425 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.518483 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.518505 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.518526 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.518542 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:53Z","lastTransitionTime":"2026-01-22T15:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.532234 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.549760 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.564098 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.575626 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.586028 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.609999 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.620652 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.620820 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.620854 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.620871 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.620887 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.620896 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:53Z","lastTransitionTime":"2026-01-22T15:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.634693 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df00c8ba6b5cbf7b2512
875703b6873a0d49edbd545551ac143bee72418494b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.645824 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.661133 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07e063ccc1b5dca63090250db98b4104084f86a334ac4d848629a0159b9a8353\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:24:49Z\\\",\\\"message\\\":\\\"mers/externalversions/factory.go:140\\\\nI0122 15:24:48.995764 6112 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.995941 6112 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:48.996013 6112 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 15:24:48.996028 6112 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 15:24:48.996036 6112 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 15:24:48.996325 6112 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.996417 6112 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:48.996444 6112 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.996680 6112 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 15:24:48.996730 6112 factory.go:656] Stopping watch factory\\\\nI0122 15:24:48.996742 6112 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 15:24:51.240218 6281 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:51.240535 6281 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:51.241209 6281 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 15:24:51.241235 6281 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 15:24:51.241255 6281 factory.go:656] Stopping watch factory\\\\nI0122 15:24:51.241275 6281 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 15:24:51.241284 6281 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 15:24:51.285035 6281 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 15:24:51.285068 6281 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 15:24:51.285121 6281 ovnkube.go:599] Stopped ovnkube\\\\nI0122 15:24:51.285158 6281 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 15:24:51.285236 6281 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.671055 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17f70b06-0bde-412f-954f-fcfa00e88b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m4zbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.679848 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.684405 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs\") pod \"network-metrics-daemon-hrdl8\" (UID: \"538e3056-0e80-4b71-ada6-b7440b283761\") " pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:24:53 crc kubenswrapper[4825]: E0122 15:24:53.684535 4825 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 15:24:53 crc kubenswrapper[4825]: E0122 15:24:53.684600 4825 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs podName:538e3056-0e80-4b71-ada6-b7440b283761 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:55.684582653 +0000 UTC m=+42.446109563 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs") pod "network-metrics-daemon-hrdl8" (UID: "538e3056-0e80-4b71-ada6-b7440b283761") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.691615 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"538e3056-0e80-4b71-ada6-b7440b283761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc 
kubenswrapper[4825]: I0122 15:24:53.706797 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02
ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 
15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.721180 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.722803 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.722840 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.722848 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 
15:24:53.722863 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.722874 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:53Z","lastTransitionTime":"2026-01-22T15:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.737227 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.803640 4825 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c8f2b_a2a796f1-0c22-4a59-a525-e426ecf221bc/ovnkube-controller/1.log" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.810922 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" event={"ID":"17f70b06-0bde-412f-954f-fcfa00e88b6f","Type":"ContainerStarted","Data":"6581cfe7742019e93e6ee5ec84f6ca535db9c4f4bc8c4240a4642ebe498a1e5f"} Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.825133 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.825395 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.825519 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.825635 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.825750 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:53Z","lastTransitionTime":"2026-01-22T15:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.830116 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.859861 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07e063ccc1b5dca63090250db98b4104084f86a334ac4d848629a0159b9a8353\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:24:49Z\\\",\\\"message\\\":\\\"mers/externalversions/factory.go:140\\\\nI0122 15:24:48.995764 6112 reflector.go:311] 
Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.995941 6112 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:48.996013 6112 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 15:24:48.996028 6112 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 15:24:48.996036 6112 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 15:24:48.996325 6112 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.996417 6112 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:48.996444 6112 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:48.996680 6112 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 15:24:48.996730 6112 factory.go:656] Stopping watch factory\\\\nI0122 15:24:48.996742 6112 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 15:24:51.240218 6281 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 
15:24:51.240535 6281 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:51.241209 6281 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 15:24:51.241235 6281 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 15:24:51.241255 6281 factory.go:656] Stopping watch factory\\\\nI0122 15:24:51.241275 6281 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 15:24:51.241284 6281 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 15:24:51.285035 6281 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 15:24:51.285068 6281 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 15:24:51.285121 6281 ovnkube.go:599] Stopped ovnkube\\\\nI0122 15:24:51.285158 6281 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 15:24:51.285236 6281 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.869954 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17f70b06-0bde-412f-954f-fcfa00e88b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f5c3b2b5365400fac811b64160f64773868d3cf378e14ae72bd4f526a40b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6581cfe7742019e93e6ee5ec84f6ca535db9c
4f4bc8c4240a4642ebe498a1e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m4zbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.881268 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.904336 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.921920 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"538e3056-0e80-4b71-ada6-b7440b283761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc 
kubenswrapper[4825]: I0122 15:24:53.927730 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.927766 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.927778 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.927794 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.927804 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:53Z","lastTransitionTime":"2026-01-22T15:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.951477 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.965298 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.973902 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.983790 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:53 crc kubenswrapper[4825]: I0122 15:24:53.992409 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:24:53Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.002567 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:24:54Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.010776 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:54Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.020306 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:54Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.029258 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.029282 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.029290 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.029304 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.029312 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:54Z","lastTransitionTime":"2026-01-22T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.029820 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:54Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.041606 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df00c8ba6b5cbf7b2512875703b6873a0d49edbd545551ac143bee72418494b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557d
b28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:54Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.131856 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.131897 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.131906 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.131921 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.131930 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:54Z","lastTransitionTime":"2026-01-22T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.233815 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.234048 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.234145 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.234238 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.234319 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:54Z","lastTransitionTime":"2026-01-22T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.337416 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.337470 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.337496 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.337538 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.337556 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:54Z","lastTransitionTime":"2026-01-22T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.439742 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.440069 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.440161 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.440253 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.440343 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:54Z","lastTransitionTime":"2026-01-22T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.480837 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 16:56:51.180348796 +0000 UTC Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.542534 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.542578 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.542592 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.542609 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.542622 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:54Z","lastTransitionTime":"2026-01-22T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.646524 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.646568 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.646579 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.646596 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.646609 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:54Z","lastTransitionTime":"2026-01-22T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.673661 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.673742 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.673766 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.673797 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.673820 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:54Z","lastTransitionTime":"2026-01-22T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:54 crc kubenswrapper[4825]: E0122 15:24:54.695847 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:54Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.700686 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.700739 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.700751 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.700770 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.700782 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:54Z","lastTransitionTime":"2026-01-22T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:54 crc kubenswrapper[4825]: E0122 15:24:54.717600 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:54Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.721227 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.721265 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.721278 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.721292 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.721302 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:54Z","lastTransitionTime":"2026-01-22T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:54 crc kubenswrapper[4825]: E0122 15:24:54.737610 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:54Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.740690 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.740723 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.740733 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.740749 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.740760 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:54Z","lastTransitionTime":"2026-01-22T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:54 crc kubenswrapper[4825]: E0122 15:24:54.752047 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:54Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.756743 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.756782 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.756790 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.756804 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.756814 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:54Z","lastTransitionTime":"2026-01-22T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:54 crc kubenswrapper[4825]: E0122 15:24:54.769866 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:24:54Z is after 2025-08-24T17:21:41Z" Jan 22 15:24:54 crc kubenswrapper[4825]: E0122 15:24:54.769996 4825 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.771387 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.771421 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.771431 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.771446 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.771482 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:54Z","lastTransitionTime":"2026-01-22T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.874633 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.874710 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.874745 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.874781 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.874805 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:54Z","lastTransitionTime":"2026-01-22T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.977729 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.977777 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.977793 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.977813 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:54 crc kubenswrapper[4825]: I0122 15:24:54.977829 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:54Z","lastTransitionTime":"2026-01-22T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.080713 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.080767 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.080781 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.080803 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.080819 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:55Z","lastTransitionTime":"2026-01-22T15:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.184111 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.184164 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.184181 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.184203 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.184219 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:55Z","lastTransitionTime":"2026-01-22T15:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.286798 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.286846 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.286934 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.286952 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.286963 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:55Z","lastTransitionTime":"2026-01-22T15:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.389530 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.389569 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.389581 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.389596 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.389606 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:55Z","lastTransitionTime":"2026-01-22T15:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.482049 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:44:21.45085208 +0000 UTC Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.492642 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.492717 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.492740 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.492772 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.492793 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:55Z","lastTransitionTime":"2026-01-22T15:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.516683 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.516713 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.516770 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.516778 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:55 crc kubenswrapper[4825]: E0122 15:24:55.516886 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:24:55 crc kubenswrapper[4825]: E0122 15:24:55.517143 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:24:55 crc kubenswrapper[4825]: E0122 15:24:55.517206 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:24:55 crc kubenswrapper[4825]: E0122 15:24:55.517313 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.595912 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.596077 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.596117 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.596159 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.596187 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:55Z","lastTransitionTime":"2026-01-22T15:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.700428 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.700482 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.700501 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.700521 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.700537 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:55Z","lastTransitionTime":"2026-01-22T15:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.704332 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs\") pod \"network-metrics-daemon-hrdl8\" (UID: \"538e3056-0e80-4b71-ada6-b7440b283761\") " pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:24:55 crc kubenswrapper[4825]: E0122 15:24:55.704442 4825 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 15:24:55 crc kubenswrapper[4825]: E0122 15:24:55.704488 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs podName:538e3056-0e80-4b71-ada6-b7440b283761 nodeName:}" failed. No retries permitted until 2026-01-22 15:24:59.704471658 +0000 UTC m=+46.465998568 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs") pod "network-metrics-daemon-hrdl8" (UID: "538e3056-0e80-4b71-ada6-b7440b283761") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.802565 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.802613 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.802623 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.802637 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.802646 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:55Z","lastTransitionTime":"2026-01-22T15:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.905032 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.905129 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.905147 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.905170 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:55 crc kubenswrapper[4825]: I0122 15:24:55.905188 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:55Z","lastTransitionTime":"2026-01-22T15:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.007479 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.007536 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.007554 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.007577 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.007594 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:56Z","lastTransitionTime":"2026-01-22T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.110000 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.110042 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.110054 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.110069 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.110079 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:56Z","lastTransitionTime":"2026-01-22T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.212039 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.212115 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.212133 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.212155 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.212212 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:56Z","lastTransitionTime":"2026-01-22T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.314254 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.314328 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.314350 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.314382 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.314404 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:56Z","lastTransitionTime":"2026-01-22T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.416941 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.417010 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.417023 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.417044 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.417055 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:56Z","lastTransitionTime":"2026-01-22T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.482652 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 05:27:08.984414519 +0000 UTC
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.519511 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.519552 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.519563 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.519581 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.519593 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:56Z","lastTransitionTime":"2026-01-22T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.622061 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.622114 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.622130 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.622153 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.622176 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:56Z","lastTransitionTime":"2026-01-22T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.725073 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.725147 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.725169 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.725199 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.725219 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:56Z","lastTransitionTime":"2026-01-22T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.827625 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.827686 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.827700 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.827722 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.827736 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:56Z","lastTransitionTime":"2026-01-22T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.930506 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.930537 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.930546 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.930559 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:56 crc kubenswrapper[4825]: I0122 15:24:56.930567 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:56Z","lastTransitionTime":"2026-01-22T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.036694 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.036744 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.036753 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.036768 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.036777 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:57Z","lastTransitionTime":"2026-01-22T15:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.139395 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.139447 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.139461 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.139480 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.139495 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:57Z","lastTransitionTime":"2026-01-22T15:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.242375 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.242435 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.242459 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.242487 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.242507 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:57Z","lastTransitionTime":"2026-01-22T15:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.345634 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.345729 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.345754 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.345788 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.345817 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:57Z","lastTransitionTime":"2026-01-22T15:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.448833 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.448877 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.448886 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.448901 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.448911 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:57Z","lastTransitionTime":"2026-01-22T15:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.483759 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 11:14:21.524084781 +0000 UTC
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.516200 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.516254 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.516196 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.516376 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 15:24:57 crc kubenswrapper[4825]: E0122 15:24:57.516575 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761"
Jan 22 15:24:57 crc kubenswrapper[4825]: E0122 15:24:57.516792 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 15:24:57 crc kubenswrapper[4825]: E0122 15:24:57.516968 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 15:24:57 crc kubenswrapper[4825]: E0122 15:24:57.517126 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.551860 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.551897 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.551906 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.551918 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.551928 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:57Z","lastTransitionTime":"2026-01-22T15:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.654657 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.654702 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.654719 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.654744 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.654766 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:57Z","lastTransitionTime":"2026-01-22T15:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.757717 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.757764 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.757776 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.757792 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.757802 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:57Z","lastTransitionTime":"2026-01-22T15:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.860113 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.860159 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.860167 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.860182 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.860192 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:57Z","lastTransitionTime":"2026-01-22T15:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.962843 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.962884 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.962899 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.962918 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:57 crc kubenswrapper[4825]: I0122 15:24:57.962933 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:57Z","lastTransitionTime":"2026-01-22T15:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.065119 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.065164 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.065173 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.065187 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.065197 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:58Z","lastTransitionTime":"2026-01-22T15:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.167790 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.167852 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.167865 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.167883 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.167897 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:58Z","lastTransitionTime":"2026-01-22T15:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.269801 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.270069 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.270084 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.270103 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.270114 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:58Z","lastTransitionTime":"2026-01-22T15:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.372875 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.372917 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.372926 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.372944 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.372953 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:58Z","lastTransitionTime":"2026-01-22T15:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.475358 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.475396 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.475404 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.475423 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.475434 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:58Z","lastTransitionTime":"2026-01-22T15:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.484155 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 04:39:34.533483953 +0000 UTC
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.577909 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.577956 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.577967 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.577998 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.578009 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:58Z","lastTransitionTime":"2026-01-22T15:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.680157 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.680209 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.680221 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.680239 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.680253 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:58Z","lastTransitionTime":"2026-01-22T15:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.782881 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.782938 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.782950 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.782970 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.783010 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:58Z","lastTransitionTime":"2026-01-22T15:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.886070 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.886121 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.886138 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.886163 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.886180 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:58Z","lastTransitionTime":"2026-01-22T15:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.988452 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.988525 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.988543 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.988570 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:58 crc kubenswrapper[4825]: I0122 15:24:58.988588 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:58Z","lastTransitionTime":"2026-01-22T15:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.091205 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.091249 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.091258 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.091275 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.091285 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:59Z","lastTransitionTime":"2026-01-22T15:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.193913 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.193960 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.193972 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.194013 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.194332 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:59Z","lastTransitionTime":"2026-01-22T15:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.297506 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.297564 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.297575 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.297590 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.297600 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:59Z","lastTransitionTime":"2026-01-22T15:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.400847 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.400913 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.400933 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.400959 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.401005 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:59Z","lastTransitionTime":"2026-01-22T15:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.484868 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 08:10:31.854593094 +0000 UTC Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.502603 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.502661 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.502679 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.502705 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.502727 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:59Z","lastTransitionTime":"2026-01-22T15:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.515968 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.516044 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:24:59 crc kubenswrapper[4825]: E0122 15:24:59.516129 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.516153 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.515968 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:24:59 crc kubenswrapper[4825]: E0122 15:24:59.516228 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:24:59 crc kubenswrapper[4825]: E0122 15:24:59.516305 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:24:59 crc kubenswrapper[4825]: E0122 15:24:59.516377 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.605813 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.605865 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.605879 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.605896 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.605907 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:59Z","lastTransitionTime":"2026-01-22T15:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.708394 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.708448 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.708466 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.708491 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.708510 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:59Z","lastTransitionTime":"2026-01-22T15:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.740404 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs\") pod \"network-metrics-daemon-hrdl8\" (UID: \"538e3056-0e80-4b71-ada6-b7440b283761\") " pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:24:59 crc kubenswrapper[4825]: E0122 15:24:59.740605 4825 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 15:24:59 crc kubenswrapper[4825]: E0122 15:24:59.740717 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs podName:538e3056-0e80-4b71-ada6-b7440b283761 nodeName:}" failed. No retries permitted until 2026-01-22 15:25:07.740685309 +0000 UTC m=+54.502212259 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs") pod "network-metrics-daemon-hrdl8" (UID: "538e3056-0e80-4b71-ada6-b7440b283761") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.811241 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.811289 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.811304 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.811325 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.811339 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:59Z","lastTransitionTime":"2026-01-22T15:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.914147 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.914202 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.914218 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.914235 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:24:59 crc kubenswrapper[4825]: I0122 15:24:59.914245 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:24:59Z","lastTransitionTime":"2026-01-22T15:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.017512 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.017589 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.017612 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.017643 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.017665 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:00Z","lastTransitionTime":"2026-01-22T15:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.127584 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.127644 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.127661 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.127689 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.127706 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:00Z","lastTransitionTime":"2026-01-22T15:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.229973 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.230026 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.230035 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.230057 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.230071 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:00Z","lastTransitionTime":"2026-01-22T15:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.332725 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.332759 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.332767 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.332781 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.332790 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:00Z","lastTransitionTime":"2026-01-22T15:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.435547 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.435601 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.435616 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.435640 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.435658 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:00Z","lastTransitionTime":"2026-01-22T15:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.485400 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:30:05.029697606 +0000 UTC Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.552206 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.552273 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.552294 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.552323 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.552347 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:00Z","lastTransitionTime":"2026-01-22T15:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.654752 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.654876 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.654932 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.654961 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.655002 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:00Z","lastTransitionTime":"2026-01-22T15:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.757647 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.757710 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.757727 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.757752 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.757770 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:00Z","lastTransitionTime":"2026-01-22T15:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.861054 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.861130 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.861153 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.861184 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.861207 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:00Z","lastTransitionTime":"2026-01-22T15:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.964112 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.964258 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.964273 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.964289 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:00 crc kubenswrapper[4825]: I0122 15:25:00.964322 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:00Z","lastTransitionTime":"2026-01-22T15:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.067656 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.067709 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.067725 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.067743 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.067754 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:01Z","lastTransitionTime":"2026-01-22T15:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.171154 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.171216 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.171236 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.171262 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.171280 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:01Z","lastTransitionTime":"2026-01-22T15:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.274606 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.274651 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.274667 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.274685 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.274702 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:01Z","lastTransitionTime":"2026-01-22T15:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.377047 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.377102 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.377119 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.377146 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.377164 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:01Z","lastTransitionTime":"2026-01-22T15:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.481411 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.481489 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.482267 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.482319 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.482340 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:01Z","lastTransitionTime":"2026-01-22T15:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.485707 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 23:00:25.935366467 +0000 UTC Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.516521 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.516562 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:01 crc kubenswrapper[4825]: E0122 15:25:01.516684 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.516736 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.516756 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:01 crc kubenswrapper[4825]: E0122 15:25:01.516871 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:01 crc kubenswrapper[4825]: E0122 15:25:01.517007 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:01 crc kubenswrapper[4825]: E0122 15:25:01.517160 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.543086 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.544227 4825 scope.go:117] "RemoveContainer" containerID="6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce" Jan 22 15:25:01 crc kubenswrapper[4825]: E0122 15:25:01.544513 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-c8f2b_openshift-ovn-kubernetes(a2a796f1-0c22-4a59-a525-e426ecf221bc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.560933 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:01Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.574696 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:01Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.584669 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.584712 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.584737 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.584762 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.584778 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:01Z","lastTransitionTime":"2026-01-22T15:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.595093 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df00c8ba6b5cbf7b2512
875703b6873a0d49edbd545551ac143bee72418494b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:01Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.615688 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:01Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.647605 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 15:24:51.240218 6281 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:51.240535 6281 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:51.241209 6281 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 15:24:51.241235 6281 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 15:24:51.241255 6281 factory.go:656] Stopping watch factory\\\\nI0122 15:24:51.241275 6281 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 15:24:51.241284 6281 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 15:24:51.285035 6281 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 15:24:51.285068 6281 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 15:24:51.285121 6281 ovnkube.go:599] Stopped ovnkube\\\\nI0122 15:24:51.285158 6281 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 15:24:51.285236 6281 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c8f2b_openshift-ovn-kubernetes(a2a796f1-0c22-4a59-a525-e426ecf221bc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e
662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:01Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.660257 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17f70b06-0bde-412f-954f-fcfa00e88b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f5c3b2b5365400fac811b64160f64773868d3cf378e14ae72bd4f526a40b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6581cfe7742019e93e6ee5ec84f6ca535db9c
4f4bc8c4240a4642ebe498a1e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m4zbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:01Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.671952 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:01Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.685762 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"538e3056-0e80-4b71-ada6-b7440b283761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:01Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:01 crc 
kubenswrapper[4825]: I0122 15:25:01.686420 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.686456 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.686476 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.686504 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.686526 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:01Z","lastTransitionTime":"2026-01-22T15:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.703040 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:01Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.717689 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:01Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.731436 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:01Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.745942 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:01Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.759209 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:01Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.772468 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:01Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.784220 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-22T15:25:01Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.788440 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.788484 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.788495 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.788510 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.788522 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:01Z","lastTransitionTime":"2026-01-22T15:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.795285 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:01Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.891333 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.891391 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.891407 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.891424 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.891436 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:01Z","lastTransitionTime":"2026-01-22T15:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.994094 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.994175 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.994195 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.994226 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:01 crc kubenswrapper[4825]: I0122 15:25:01.994251 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:01Z","lastTransitionTime":"2026-01-22T15:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.097181 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.097250 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.097278 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.097308 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.097330 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:02Z","lastTransitionTime":"2026-01-22T15:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.200487 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.200544 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.200562 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.200588 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.200607 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:02Z","lastTransitionTime":"2026-01-22T15:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.304106 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.304182 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.304206 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.304236 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.304253 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:02Z","lastTransitionTime":"2026-01-22T15:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.408457 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.408525 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.408542 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.408564 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.408583 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:02Z","lastTransitionTime":"2026-01-22T15:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.486178 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 21:24:12.362871902 +0000 UTC Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.511556 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.511621 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.511638 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.511703 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.511725 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:02Z","lastTransitionTime":"2026-01-22T15:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.614369 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.614417 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.614435 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.614459 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.614477 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:02Z","lastTransitionTime":"2026-01-22T15:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.717246 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.717295 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.717308 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.717326 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.717350 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:02Z","lastTransitionTime":"2026-01-22T15:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.820020 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.820103 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.820116 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.820134 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.820146 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:02Z","lastTransitionTime":"2026-01-22T15:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.922373 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.922407 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.922432 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.922445 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:02 crc kubenswrapper[4825]: I0122 15:25:02.922452 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:02Z","lastTransitionTime":"2026-01-22T15:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.024386 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.024425 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.024434 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.024447 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.024456 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:03Z","lastTransitionTime":"2026-01-22T15:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.127319 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.127858 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.127878 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.127902 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.127916 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:03Z","lastTransitionTime":"2026-01-22T15:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.347650 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:03 crc kubenswrapper[4825]: E0122 15:25:03.348191 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-22 15:25:35.348104866 +0000 UTC m=+82.109631816 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.349838 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.349902 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.349926 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.349955 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.350015 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:03Z","lastTransitionTime":"2026-01-22T15:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.449034 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.449119 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.449160 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.449195 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:03 crc kubenswrapper[4825]: E0122 15:25:03.449224 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 22 15:25:03 crc kubenswrapper[4825]: E0122 15:25:03.449272 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 15:25:03 crc kubenswrapper[4825]: E0122 15:25:03.449292 4825 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:25:03 crc kubenswrapper[4825]: E0122 15:25:03.449291 4825 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 15:25:03 crc kubenswrapper[4825]: E0122 15:25:03.449329 4825 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 15:25:03 crc kubenswrapper[4825]: E0122 15:25:03.449331 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 15:25:03 crc kubenswrapper[4825]: E0122 15:25:03.449375 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 15:25:03 crc kubenswrapper[4825]: E0122 15:25:03.449386 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 15:25:35.44936019 +0000 UTC m=+82.210887140 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:25:03 crc kubenswrapper[4825]: E0122 15:25:03.449393 4825 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:25:03 crc kubenswrapper[4825]: E0122 15:25:03.449418 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 15:25:35.449404741 +0000 UTC m=+82.210931691 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 15:25:03 crc kubenswrapper[4825]: E0122 15:25:03.449441 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 15:25:35.449428912 +0000 UTC m=+82.210955852 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 15:25:03 crc kubenswrapper[4825]: E0122 15:25:03.449461 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 15:25:35.449450872 +0000 UTC m=+82.210977822 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.453606 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.453653 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.453669 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.453693 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.453711 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:03Z","lastTransitionTime":"2026-01-22T15:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.487258 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:36:14.725583881 +0000 UTC Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.516804 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.516821 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.517017 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:03 crc kubenswrapper[4825]: E0122 15:25:03.517209 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.517898 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:03 crc kubenswrapper[4825]: E0122 15:25:03.518087 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:03 crc kubenswrapper[4825]: E0122 15:25:03.518231 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:03 crc kubenswrapper[4825]: E0122 15:25:03.518372 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.543668 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\
\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:03Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.557709 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.557782 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.557805 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.557835 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.557857 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:03Z","lastTransitionTime":"2026-01-22T15:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.563333 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:03Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.582111 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:03Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.603390 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:03Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.618044 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"538e3056-0e80-4b71-ada6-b7440b283761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:03Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:03 crc 
kubenswrapper[4825]: I0122 15:25:03.632363 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:03Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.642471 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:03Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.656518 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b6
7b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:03Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.659823 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.659865 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.659878 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.659896 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.659910 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:03Z","lastTransitionTime":"2026-01-22T15:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.668521 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:03Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.678902 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:03Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.692578 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:03Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.706327 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:03Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.723464 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df00c8ba6b5cbf7b2512875703b6873a0d49edbd545551ac143bee72418494b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557d
b28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:03Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.736951 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:03Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.756143 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 15:24:51.240218 6281 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:51.240535 6281 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:51.241209 6281 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 15:24:51.241235 6281 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 15:24:51.241255 6281 factory.go:656] Stopping watch factory\\\\nI0122 15:24:51.241275 6281 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 15:24:51.241284 6281 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 15:24:51.285035 6281 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 15:24:51.285068 6281 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 15:24:51.285121 6281 ovnkube.go:599] Stopped ovnkube\\\\nI0122 15:24:51.285158 6281 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 15:24:51.285236 6281 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c8f2b_openshift-ovn-kubernetes(a2a796f1-0c22-4a59-a525-e426ecf221bc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e
662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:03Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.761806 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.761844 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.761854 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.761871 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.761883 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:03Z","lastTransitionTime":"2026-01-22T15:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.767668 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17f70b06-0bde-412f-954f-fcfa00e88b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f5c3b2b5365400fac811b64160f64773868d3cf378e14ae72bd4f526a40b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6581cfe7742019e93e6ee5ec84f6ca535db9c4f4bc8c4240a4642ebe498a1e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m4zbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:03Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.864550 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.864781 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.864818 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.864846 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.864868 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:03Z","lastTransitionTime":"2026-01-22T15:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.967637 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.967715 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.967732 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.967759 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:03 crc kubenswrapper[4825]: I0122 15:25:03.967776 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:03Z","lastTransitionTime":"2026-01-22T15:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.071413 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.071485 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.071506 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.071536 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.071556 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:04Z","lastTransitionTime":"2026-01-22T15:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.175199 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.175255 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.175272 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.175299 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.175319 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:04Z","lastTransitionTime":"2026-01-22T15:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.277812 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.277851 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.277862 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.277880 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.277892 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:04Z","lastTransitionTime":"2026-01-22T15:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.380377 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.380441 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.380459 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.380483 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.380501 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:04Z","lastTransitionTime":"2026-01-22T15:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.483169 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.483237 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.483253 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.483277 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.483294 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:04Z","lastTransitionTime":"2026-01-22T15:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.487730 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 17:07:03.472195787 +0000 UTC Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.585616 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.585670 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.585691 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.585715 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.585731 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:04Z","lastTransitionTime":"2026-01-22T15:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.689299 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.689364 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.689403 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.689434 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.689455 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:04Z","lastTransitionTime":"2026-01-22T15:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.796651 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.796715 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.796738 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.796763 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.796779 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:04Z","lastTransitionTime":"2026-01-22T15:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.899442 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.899503 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.899520 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.899547 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.899564 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:04Z","lastTransitionTime":"2026-01-22T15:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.923835 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.923908 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.923932 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.923963 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.924012 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:04Z","lastTransitionTime":"2026-01-22T15:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:04 crc kubenswrapper[4825]: E0122 15:25:04.945569 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:04Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.950280 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.950339 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.950358 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.950383 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.950400 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:04Z","lastTransitionTime":"2026-01-22T15:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:04 crc kubenswrapper[4825]: E0122 15:25:04.968221 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:04Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.973135 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.973213 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.973248 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.973281 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.973306 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:04Z","lastTransitionTime":"2026-01-22T15:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:04 crc kubenswrapper[4825]: E0122 15:25:04.993230 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:04Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.997920 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.998026 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.998065 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.998094 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:04 crc kubenswrapper[4825]: I0122 15:25:04.998117 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:04Z","lastTransitionTime":"2026-01-22T15:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:05 crc kubenswrapper[4825]: E0122 15:25:05.017468 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:05Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.021908 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.021955 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.021969 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.022004 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.022017 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:05Z","lastTransitionTime":"2026-01-22T15:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:05 crc kubenswrapper[4825]: E0122 15:25:05.039934 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:05Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:05 crc kubenswrapper[4825]: E0122 15:25:05.040091 4825 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.041948 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.042048 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.042077 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.042108 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.042129 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:05Z","lastTransitionTime":"2026-01-22T15:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.145264 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.145320 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.145337 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.145359 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.145376 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:05Z","lastTransitionTime":"2026-01-22T15:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.247940 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.248009 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.248017 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.248031 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.248040 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:05Z","lastTransitionTime":"2026-01-22T15:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.349777 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.349823 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.349841 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.349864 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.349881 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:05Z","lastTransitionTime":"2026-01-22T15:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.452531 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.452577 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.452588 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.452607 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.452624 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:05Z","lastTransitionTime":"2026-01-22T15:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.488240 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 20:15:04.280500986 +0000 UTC Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.516692 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.516733 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.516829 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.517085 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:05 crc kubenswrapper[4825]: E0122 15:25:05.517107 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:05 crc kubenswrapper[4825]: E0122 15:25:05.517225 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:05 crc kubenswrapper[4825]: E0122 15:25:05.517371 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:05 crc kubenswrapper[4825]: E0122 15:25:05.517534 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.555832 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.555880 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.555892 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.555912 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.555925 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:05Z","lastTransitionTime":"2026-01-22T15:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.658804 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.658831 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.658839 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.658853 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.658861 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:05Z","lastTransitionTime":"2026-01-22T15:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.761727 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.761777 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.761787 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.761805 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.761815 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:05Z","lastTransitionTime":"2026-01-22T15:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.864308 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.864350 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.864359 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.864371 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.864380 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:05Z","lastTransitionTime":"2026-01-22T15:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.971348 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.971417 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.971447 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.971478 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:05 crc kubenswrapper[4825]: I0122 15:25:05.971501 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:05Z","lastTransitionTime":"2026-01-22T15:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.074261 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.074323 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.074335 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.074356 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.074369 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:06Z","lastTransitionTime":"2026-01-22T15:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.177223 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.177286 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.177308 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.177337 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.177359 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:06Z","lastTransitionTime":"2026-01-22T15:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.280291 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.280362 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.280378 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.280398 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.280414 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:06Z","lastTransitionTime":"2026-01-22T15:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.383272 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.383324 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.383341 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.383369 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.383386 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:06Z","lastTransitionTime":"2026-01-22T15:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.485391 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.485430 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.485438 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.485451 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.485459 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:06Z","lastTransitionTime":"2026-01-22T15:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.488913 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 11:13:48.159618618 +0000 UTC Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.588095 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.588128 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.588137 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.588152 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.588161 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:06Z","lastTransitionTime":"2026-01-22T15:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.691176 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.691225 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.691235 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.691254 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.691288 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:06Z","lastTransitionTime":"2026-01-22T15:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.793789 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.793851 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.793868 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.793891 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.793910 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:06Z","lastTransitionTime":"2026-01-22T15:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.897726 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.897786 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.897804 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.897829 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:06 crc kubenswrapper[4825]: I0122 15:25:06.897845 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:06Z","lastTransitionTime":"2026-01-22T15:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.001041 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.001122 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.001139 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.001161 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.001176 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:07Z","lastTransitionTime":"2026-01-22T15:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.028437 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.041451 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.044029 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:07Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.059789 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:07Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.081144 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df00c8ba6b5cbf7b2512875703b6873a0d49edbd545551ac143bee72418494b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557d
b28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:07Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.094857 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:07Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.104171 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.104234 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.104254 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.104280 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.104298 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:07Z","lastTransitionTime":"2026-01-22T15:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.124749 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 15:24:51.240218 6281 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:51.240535 6281 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:51.241209 6281 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 15:24:51.241235 6281 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 15:24:51.241255 6281 factory.go:656] Stopping watch factory\\\\nI0122 15:24:51.241275 6281 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 15:24:51.241284 6281 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 15:24:51.285035 6281 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 15:24:51.285068 6281 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 15:24:51.285121 6281 ovnkube.go:599] Stopped ovnkube\\\\nI0122 15:24:51.285158 6281 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 15:24:51.285236 6281 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c8f2b_openshift-ovn-kubernetes(a2a796f1-0c22-4a59-a525-e426ecf221bc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e
662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:07Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.144489 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17f70b06-0bde-412f-954f-fcfa00e88b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f5c3b2b5365400fac811b64160f64773868d3cf378e14ae72bd4f526a40b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6581cfe7742019e93e6ee5ec84f6ca535db9c
4f4bc8c4240a4642ebe498a1e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m4zbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:07Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.157808 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:07Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.173109 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"538e3056-0e80-4b71-ada6-b7440b283761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:07Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:07 crc 
kubenswrapper[4825]: I0122 15:25:07.205363 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02
ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 
15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:07Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.206057 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.206091 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.206106 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.206126 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.206142 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:07Z","lastTransitionTime":"2026-01-22T15:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.223533 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:07Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.237222 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:07Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.255766 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:07Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.269429 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:07Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.283317 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:07Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.295709 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-22T15:25:07Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.306673 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:07Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.308346 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.308379 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.308391 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.308407 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.308418 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:07Z","lastTransitionTime":"2026-01-22T15:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.410653 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.410706 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.410726 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.410750 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.410764 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:07Z","lastTransitionTime":"2026-01-22T15:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.489083 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 19:07:52.376172722 +0000 UTC Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.514538 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.514570 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.514588 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.514606 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.514616 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:07Z","lastTransitionTime":"2026-01-22T15:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.516041 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.516040 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:07 crc kubenswrapper[4825]: E0122 15:25:07.516803 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.516245 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.516090 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:07 crc kubenswrapper[4825]: E0122 15:25:07.516930 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:07 crc kubenswrapper[4825]: E0122 15:25:07.517031 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:07 crc kubenswrapper[4825]: E0122 15:25:07.517125 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.617127 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.617156 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.617164 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.617177 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.617186 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:07Z","lastTransitionTime":"2026-01-22T15:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.720277 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.720345 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.720369 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.720397 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.720419 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:07Z","lastTransitionTime":"2026-01-22T15:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.790740 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs\") pod \"network-metrics-daemon-hrdl8\" (UID: \"538e3056-0e80-4b71-ada6-b7440b283761\") " pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:07 crc kubenswrapper[4825]: E0122 15:25:07.790850 4825 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 15:25:07 crc kubenswrapper[4825]: E0122 15:25:07.790912 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs podName:538e3056-0e80-4b71-ada6-b7440b283761 nodeName:}" failed. No retries permitted until 2026-01-22 15:25:23.79089781 +0000 UTC m=+70.552424720 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs") pod "network-metrics-daemon-hrdl8" (UID: "538e3056-0e80-4b71-ada6-b7440b283761") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.823646 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.823698 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.823716 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.823749 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.823765 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:07Z","lastTransitionTime":"2026-01-22T15:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.927429 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.927489 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.927507 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.927532 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:07 crc kubenswrapper[4825]: I0122 15:25:07.927548 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:07Z","lastTransitionTime":"2026-01-22T15:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.030003 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.030041 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.030049 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.030064 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.030073 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:08Z","lastTransitionTime":"2026-01-22T15:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.133138 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.133183 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.133194 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.133208 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.133218 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:08Z","lastTransitionTime":"2026-01-22T15:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.235599 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.235645 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.235657 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.235676 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.235689 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:08Z","lastTransitionTime":"2026-01-22T15:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.339168 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.339240 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.339267 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.339297 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.339318 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:08Z","lastTransitionTime":"2026-01-22T15:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.441694 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.441788 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.441809 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.441836 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.441854 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:08Z","lastTransitionTime":"2026-01-22T15:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.500076 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 05:03:59.408265253 +0000 UTC Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.545531 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.545610 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.545634 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.545663 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.545685 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:08Z","lastTransitionTime":"2026-01-22T15:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.649501 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.649559 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.649576 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.649601 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.649619 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:08Z","lastTransitionTime":"2026-01-22T15:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.752879 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.752972 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.753075 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.753107 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.753127 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:08Z","lastTransitionTime":"2026-01-22T15:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.856269 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.856345 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.856363 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.856388 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.856405 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:08Z","lastTransitionTime":"2026-01-22T15:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.960690 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.960796 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.960821 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.960896 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:08 crc kubenswrapper[4825]: I0122 15:25:08.960920 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:08Z","lastTransitionTime":"2026-01-22T15:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.064736 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.064793 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.064810 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.064834 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.064852 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:09Z","lastTransitionTime":"2026-01-22T15:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.168059 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.168109 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.168122 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.168141 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.168156 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:09Z","lastTransitionTime":"2026-01-22T15:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.271279 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.271314 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.271332 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.271348 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.271358 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:09Z","lastTransitionTime":"2026-01-22T15:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.373677 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.373711 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.373721 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.373737 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.373752 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:09Z","lastTransitionTime":"2026-01-22T15:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.476217 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.476264 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.476275 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.476290 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.476301 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:09Z","lastTransitionTime":"2026-01-22T15:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.500930 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 02:25:40.031441059 +0000 UTC Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.516572 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.516583 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.516689 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:09 crc kubenswrapper[4825]: E0122 15:25:09.516773 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.516818 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:09 crc kubenswrapper[4825]: E0122 15:25:09.517034 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:09 crc kubenswrapper[4825]: E0122 15:25:09.517073 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:09 crc kubenswrapper[4825]: E0122 15:25:09.517191 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.579156 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.579217 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.579234 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.579258 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.579281 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:09Z","lastTransitionTime":"2026-01-22T15:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.682785 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.682864 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.682890 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.682922 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.682943 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:09Z","lastTransitionTime":"2026-01-22T15:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.785809 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.785861 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.785871 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.785890 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.785906 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:09Z","lastTransitionTime":"2026-01-22T15:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.889042 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.889119 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.889142 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.889171 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.889194 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:09Z","lastTransitionTime":"2026-01-22T15:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.992728 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.992809 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.992826 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.992857 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:09 crc kubenswrapper[4825]: I0122 15:25:09.992876 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:09Z","lastTransitionTime":"2026-01-22T15:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.094668 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.094723 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.094733 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.094748 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.094758 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:10Z","lastTransitionTime":"2026-01-22T15:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.197375 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.197451 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.197472 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.197501 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.197522 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:10Z","lastTransitionTime":"2026-01-22T15:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.299814 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.299864 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.299875 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.299894 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.299908 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:10Z","lastTransitionTime":"2026-01-22T15:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.407671 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.407751 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.407774 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.407804 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.407827 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:10Z","lastTransitionTime":"2026-01-22T15:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.502025 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 16:17:57.608183272 +0000 UTC Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.509781 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.509812 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.509822 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.509844 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.509855 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:10Z","lastTransitionTime":"2026-01-22T15:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.612359 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.612424 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.612442 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.612468 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.612491 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:10Z","lastTransitionTime":"2026-01-22T15:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.715764 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.715813 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.715835 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.715863 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.715885 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:10Z","lastTransitionTime":"2026-01-22T15:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.818901 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.818954 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.818973 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.819039 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.819056 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:10Z","lastTransitionTime":"2026-01-22T15:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.922088 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.922184 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.922215 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.922245 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:10 crc kubenswrapper[4825]: I0122 15:25:10.922268 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:10Z","lastTransitionTime":"2026-01-22T15:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.025190 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.025224 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.025233 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.025246 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.025255 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:11Z","lastTransitionTime":"2026-01-22T15:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.127902 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.127955 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.127971 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.128034 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.128069 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:11Z","lastTransitionTime":"2026-01-22T15:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.230938 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.231048 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.231066 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.231087 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.231102 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:11Z","lastTransitionTime":"2026-01-22T15:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.333147 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.333213 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.333236 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.333264 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.333287 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:11Z","lastTransitionTime":"2026-01-22T15:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.435754 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.435804 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.435819 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.435837 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.435848 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:11Z","lastTransitionTime":"2026-01-22T15:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.502847 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 09:43:42.608924968 +0000 UTC Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.516286 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.516321 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.516415 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:11 crc kubenswrapper[4825]: E0122 15:25:11.516484 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.516524 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:11 crc kubenswrapper[4825]: E0122 15:25:11.516718 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:11 crc kubenswrapper[4825]: E0122 15:25:11.516841 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:11 crc kubenswrapper[4825]: E0122 15:25:11.516917 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.537288 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.537335 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.537346 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.537361 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.537372 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:11Z","lastTransitionTime":"2026-01-22T15:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.639455 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.639516 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.639535 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.639561 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.639576 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:11Z","lastTransitionTime":"2026-01-22T15:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.742712 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.742783 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.742804 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.742833 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.742856 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:11Z","lastTransitionTime":"2026-01-22T15:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.845966 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.846071 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.846098 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.846128 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.846151 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:11Z","lastTransitionTime":"2026-01-22T15:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.949047 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.949098 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.949112 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.949132 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:11 crc kubenswrapper[4825]: I0122 15:25:11.949148 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:11Z","lastTransitionTime":"2026-01-22T15:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.052379 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.052445 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.052462 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.052486 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.052504 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:12Z","lastTransitionTime":"2026-01-22T15:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.155555 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.155594 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.155604 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.155620 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.155629 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:12Z","lastTransitionTime":"2026-01-22T15:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.258332 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.258377 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.258387 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.258404 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.258415 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:12Z","lastTransitionTime":"2026-01-22T15:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.361635 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.362016 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.362034 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.362062 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.362083 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:12Z","lastTransitionTime":"2026-01-22T15:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.465275 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.465362 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.465387 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.465421 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.465446 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:12Z","lastTransitionTime":"2026-01-22T15:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.503112 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 02:15:41.740106773 +0000 UTC Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.568973 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.569148 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.569170 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.569194 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.569211 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:12Z","lastTransitionTime":"2026-01-22T15:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.672761 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.672821 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.672837 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.672862 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.672883 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:12Z","lastTransitionTime":"2026-01-22T15:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.776316 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.776665 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.776806 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.776940 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.777119 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:12Z","lastTransitionTime":"2026-01-22T15:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.880220 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.880288 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.880306 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.880332 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.880349 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:12Z","lastTransitionTime":"2026-01-22T15:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.983011 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.983390 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.983550 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.983697 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:12 crc kubenswrapper[4825]: I0122 15:25:12.983829 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:12Z","lastTransitionTime":"2026-01-22T15:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.090542 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.090730 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.090957 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.091303 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.091596 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:13Z","lastTransitionTime":"2026-01-22T15:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.194132 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.194172 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.194180 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.194195 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.194204 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:13Z","lastTransitionTime":"2026-01-22T15:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.297370 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.297415 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.297427 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.297444 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.297456 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:13Z","lastTransitionTime":"2026-01-22T15:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.400419 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.400460 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.400470 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.400485 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.400495 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:13Z","lastTransitionTime":"2026-01-22T15:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.502752 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.502787 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.502796 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.502809 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.502820 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:13Z","lastTransitionTime":"2026-01-22T15:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.503297 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 20:19:50.979151738 +0000 UTC Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.516553 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.516584 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:13 crc kubenswrapper[4825]: E0122 15:25:13.516716 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.516784 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.516812 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:13 crc kubenswrapper[4825]: E0122 15:25:13.516882 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:13 crc kubenswrapper[4825]: E0122 15:25:13.516960 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:13 crc kubenswrapper[4825]: E0122 15:25:13.517150 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.533655 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:13Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.545303 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:25:13Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.562059 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:25:13Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.573745 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:13Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.586539 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:13Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.601231 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:13Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.606413 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.606468 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.606488 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.606506 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.606519 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:13Z","lastTransitionTime":"2026-01-22T15:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.619048 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:13Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.635023 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df00c8ba6b5cbf7b2512875703b6873a0d49edbd545551ac143bee72418494b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557d
b28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:13Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.647113 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:13Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.669785 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 15:24:51.240218 6281 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:51.240535 6281 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:51.241209 6281 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 15:24:51.241235 6281 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 15:24:51.241255 6281 factory.go:656] Stopping watch factory\\\\nI0122 15:24:51.241275 6281 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 15:24:51.241284 6281 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 15:24:51.285035 6281 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 15:24:51.285068 6281 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 15:24:51.285121 6281 ovnkube.go:599] Stopped ovnkube\\\\nI0122 15:24:51.285158 6281 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 15:24:51.285236 6281 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c8f2b_openshift-ovn-kubernetes(a2a796f1-0c22-4a59-a525-e426ecf221bc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e
662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:13Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.688540 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17f70b06-0bde-412f-954f-fcfa00e88b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f5c3b2b5365400fac811b64160f64773868d3cf378e14ae72bd4f526a40b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6581cfe7742019e93e6ee5ec84f6ca535db9c
4f4bc8c4240a4642ebe498a1e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m4zbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:13Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.699787 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"969fd89f-29f7-421b-89de-a1d38e944869\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3fdedb467b8b1788321723365713bab9c0cad404c56cee6dbf32d4d9bf2c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c1d84c16dced7bc950479100fb3a934b5522d4ede9f73a3bfc5f084bbc0f853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0877de75267dd3219c7d77a784f896f75f5c4aafdd4fedda14f49d858064ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7bd1405bc592021fa0cf6bd6b9347ef4917bf2083a8008655d85a9535e38346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b7bd1405bc592021fa0cf6bd6b9347ef4917bf2083a8008655d85a9535e38346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:13Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.708430 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.708465 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.708476 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.708491 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.708502 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:13Z","lastTransitionTime":"2026-01-22T15:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.715711 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:13Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.728827 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:13Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.744367 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:13Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.753460 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:13Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.763578 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"538e3056-0e80-4b71-ada6-b7440b283761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:13Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:13 crc 
kubenswrapper[4825]: I0122 15:25:13.810816 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.811181 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.811336 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.811511 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.811651 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:13Z","lastTransitionTime":"2026-01-22T15:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.913954 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.913996 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.914023 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.914038 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:13 crc kubenswrapper[4825]: I0122 15:25:13.914047 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:13Z","lastTransitionTime":"2026-01-22T15:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.017611 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.017674 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.017692 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.017716 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.017734 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:14Z","lastTransitionTime":"2026-01-22T15:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.120134 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.120443 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.120582 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.120704 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.120784 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:14Z","lastTransitionTime":"2026-01-22T15:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.223645 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.223690 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.223701 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.223720 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.223733 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:14Z","lastTransitionTime":"2026-01-22T15:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.325914 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.325954 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.325962 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.326006 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.326019 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:14Z","lastTransitionTime":"2026-01-22T15:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.429162 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.429207 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.429217 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.429235 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.429247 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:14Z","lastTransitionTime":"2026-01-22T15:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.504028 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 13:56:35.551098097 +0000 UTC Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.532320 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.532366 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.532374 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.532389 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.532398 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:14Z","lastTransitionTime":"2026-01-22T15:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.634884 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.634947 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.634965 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.635012 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.635030 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:14Z","lastTransitionTime":"2026-01-22T15:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.737673 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.737703 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.737713 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.737728 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.737738 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:14Z","lastTransitionTime":"2026-01-22T15:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.839910 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.839952 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.839964 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.839994 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.840006 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:14Z","lastTransitionTime":"2026-01-22T15:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.943038 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.943086 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.943097 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.943116 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:14 crc kubenswrapper[4825]: I0122 15:25:14.943129 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:14Z","lastTransitionTime":"2026-01-22T15:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.046361 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.046438 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.046457 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.046483 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.046503 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:15Z","lastTransitionTime":"2026-01-22T15:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.094714 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.094753 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.094761 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.094777 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.094786 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:15Z","lastTransitionTime":"2026-01-22T15:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:15 crc kubenswrapper[4825]: E0122 15:25:15.113261 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:15Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.116929 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.116963 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.116971 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.116998 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.117009 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:15Z","lastTransitionTime":"2026-01-22T15:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:15 crc kubenswrapper[4825]: E0122 15:25:15.134906 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:15Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.138846 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.138909 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.138942 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.138959 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.138970 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:15Z","lastTransitionTime":"2026-01-22T15:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:15 crc kubenswrapper[4825]: E0122 15:25:15.150520 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:15Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.154079 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.154120 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.154132 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.154148 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.154159 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:15Z","lastTransitionTime":"2026-01-22T15:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:15 crc kubenswrapper[4825]: E0122 15:25:15.167621 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:15Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.171555 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.171625 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.171638 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.171653 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.171665 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:15Z","lastTransitionTime":"2026-01-22T15:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:15 crc kubenswrapper[4825]: E0122 15:25:15.183654 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:15Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:15 crc kubenswrapper[4825]: E0122 15:25:15.183845 4825 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.185808 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.185847 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.185859 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.185876 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.185889 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:15Z","lastTransitionTime":"2026-01-22T15:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.287658 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.287730 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.287747 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.287765 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.287779 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:15Z","lastTransitionTime":"2026-01-22T15:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.390622 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.390659 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.390667 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.390680 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.390688 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:15Z","lastTransitionTime":"2026-01-22T15:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.493604 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.493690 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.493706 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.493728 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.493745 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:15Z","lastTransitionTime":"2026-01-22T15:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.504192 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 11:18:04.344919091 +0000 UTC Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.516733 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.516759 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.516937 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:15 crc kubenswrapper[4825]: E0122 15:25:15.516934 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:15 crc kubenswrapper[4825]: E0122 15:25:15.517536 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:15 crc kubenswrapper[4825]: E0122 15:25:15.517743 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.518095 4825 scope.go:117] "RemoveContainer" containerID="6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.518583 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:15 crc kubenswrapper[4825]: E0122 15:25:15.518849 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.596761 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.596792 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.596801 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.596815 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.596824 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:15Z","lastTransitionTime":"2026-01-22T15:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.700666 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.701068 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.701093 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.701122 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.701148 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:15Z","lastTransitionTime":"2026-01-22T15:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.803020 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.803049 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.803059 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.803075 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.803086 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:15Z","lastTransitionTime":"2026-01-22T15:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.893898 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c8f2b_a2a796f1-0c22-4a59-a525-e426ecf221bc/ovnkube-controller/1.log" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.897392 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerStarted","Data":"14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e"} Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.897792 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.904813 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.904842 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.904855 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.904870 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.904882 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:15Z","lastTransitionTime":"2026-01-22T15:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.918540 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:15Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.941923 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 15:24:51.240218 6281 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:51.240535 6281 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:51.241209 6281 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 15:24:51.241235 6281 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 15:24:51.241255 6281 factory.go:656] Stopping watch factory\\\\nI0122 15:24:51.241275 6281 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 15:24:51.241284 6281 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 15:24:51.285035 6281 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 15:24:51.285068 6281 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 15:24:51.285121 6281 ovnkube.go:599] Stopped ovnkube\\\\nI0122 15:24:51.285158 6281 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 15:24:51.285236 6281 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:25:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:15Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.958250 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17f70b06-0bde-412f-954f-fcfa00e88b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f5c3b2b5365400fac811b64160f64773868d3cf378e14ae72bd4f526a40b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6581cfe7742019e93e6ee5ec84f6ca535db9c
4f4bc8c4240a4642ebe498a1e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m4zbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:15Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.973819 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"969fd89f-29f7-421b-89de-a1d38e944869\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3fdedb467b8b1788321723365713bab9c0cad404c56cee6dbf32d4d9bf2c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c1d84c16dced7bc950479100fb3a934b5522d4ede9f73a3bfc5f084bbc0f853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0877de75267dd3219c7d77a784f896f75f5c4aafdd4fedda14f49d858064ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7bd1405bc592021fa0cf6bd6b9347ef4917bf2083a8008655d85a9535e38346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b7bd1405bc592021fa0cf6bd6b9347ef4917bf2083a8008655d85a9535e38346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:15Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:15 crc kubenswrapper[4825]: I0122 15:25:15.989257 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a7354
2ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:15Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.007831 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:16Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.008309 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.008382 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.008407 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 
15:25:16.008437 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.008460 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:16Z","lastTransitionTime":"2026-01-22T15:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.028597 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:16Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.044539 4825 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:16Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.059143 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"538e3056-0e80-4b71-ada6-b7440b283761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:16Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:16 crc 
kubenswrapper[4825]: I0122 15:25:16.073223 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:16Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.086670 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:16Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.102431 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b6
7b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:16Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.110142 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.110178 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.110189 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.110204 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.110215 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:16Z","lastTransitionTime":"2026-01-22T15:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.116588 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:16Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.126952 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:16Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.145821 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:16Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.159755 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:16Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.175633 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df00c8ba6b5cbf7b2512875703b6873a0d49edbd545551ac143bee72418494b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557d
b28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:16Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.212572 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.212607 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.212615 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.212630 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.212639 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:16Z","lastTransitionTime":"2026-01-22T15:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.314850 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.314885 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.314893 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.314907 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.314918 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:16Z","lastTransitionTime":"2026-01-22T15:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.417126 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.417171 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.417179 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.417194 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.417204 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:16Z","lastTransitionTime":"2026-01-22T15:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.504658 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:49:48.936876047 +0000 UTC Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.519773 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.519803 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.519813 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.519828 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.519839 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:16Z","lastTransitionTime":"2026-01-22T15:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.622402 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.622468 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.622491 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.622513 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.622529 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:16Z","lastTransitionTime":"2026-01-22T15:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.724673 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.724709 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.724719 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.724735 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.724747 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:16Z","lastTransitionTime":"2026-01-22T15:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.827076 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.827133 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.827155 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.827183 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.827202 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:16Z","lastTransitionTime":"2026-01-22T15:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.902636 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c8f2b_a2a796f1-0c22-4a59-a525-e426ecf221bc/ovnkube-controller/2.log" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.903637 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c8f2b_a2a796f1-0c22-4a59-a525-e426ecf221bc/ovnkube-controller/1.log" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.906101 4825 generic.go:334] "Generic (PLEG): container finished" podID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerID="14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e" exitCode=1 Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.906175 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerDied","Data":"14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e"} Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.906257 4825 scope.go:117] "RemoveContainer" containerID="6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.907437 4825 scope.go:117] "RemoveContainer" containerID="14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e" Jan 22 15:25:16 crc kubenswrapper[4825]: E0122 15:25:16.907742 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c8f2b_openshift-ovn-kubernetes(a2a796f1-0c22-4a59-a525-e426ecf221bc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.925023 4825 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:16Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.929429 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.929498 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.929515 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.929539 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.929558 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:16Z","lastTransitionTime":"2026-01-22T15:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.943709 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b4e20dde3026bc490f1d26d3c0ff46a4689fa62b717516d1dfc0ba8093e1fce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 15:24:51.240218 6281 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 15:24:51.240535 6281 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 15:24:51.241209 6281 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 15:24:51.241235 6281 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 15:24:51.241255 6281 factory.go:656] Stopping watch factory\\\\nI0122 15:24:51.241275 6281 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 15:24:51.241284 6281 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 15:24:51.285035 6281 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 15:24:51.285068 6281 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 15:24:51.285121 6281 ovnkube.go:599] Stopped ovnkube\\\\nI0122 15:24:51.285158 6281 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 15:24:51.285236 6281 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:25:16Z\\\",\\\"message\\\":\\\"tes:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 15:25:16.504040 6603 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0122 15:25:16.504050 6603 services_controller.go:453] Built service openshift-kube-apiserver-operator/metrics template LB for network=default: []services.LB{}\\\\nI0122 15:25:16.504059 6603 services_controller.go:454] Service 
openshift-kube-apiserver-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 15:25:16.504058 6603 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0122 15:25:16.504098 6603 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controlle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:25:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"
mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:16Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.953849 4825 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17f70b06-0bde-412f-954f-fcfa00e88b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f5c3b2b5365400fac811b64160f64773868d3cf378e14ae72bd4f526a40b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6581cfe7742019e93e6ee5ec84f6ca535db9c4f4bc8c4240a4642ebe498a1e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m4zbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:16Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.970203 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:16Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.982844 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:16Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:16 crc kubenswrapper[4825]: I0122 15:25:16.993483 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"538e3056-0e80-4b71-ada6-b7440b283761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:16Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:17 crc 
kubenswrapper[4825]: I0122 15:25:17.007082 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"969fd89f-29f7-421b-89de-a1d38e944869\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3fdedb467b8b1788321723365713bab9c0cad404c56cee6dbf32d4d9bf2c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c1d84c16dced7bc950479100fb3a934b5522d4ede9f73a3bfc5f084bbc0f853\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0877de75267dd3219c7d77a784f896f75f5c4aafdd4fedda14f49d858064ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7bd1405bc592021fa0cf6bd6b9347ef4917bf2083a8008655d85a9535e38346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7bd1405bc592021fa0cf6bd6b9347ef4917bf2083a8008655d85a9535e38346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:17Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.024532 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a7354
2ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:17Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.032055 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.032089 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.032100 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.032117 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.032139 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:17Z","lastTransitionTime":"2026-01-22T15:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.041781 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:17Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.052812 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:17Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.066400 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:17Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.077451 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:25:17Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.088774 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:25:17Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.098690 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:17Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.111416 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:17Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.125229 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:17Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.134565 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.134609 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.134624 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.134645 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.134663 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:17Z","lastTransitionTime":"2026-01-22T15:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.142551 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df00c8ba6b5cbf7b2512
875703b6873a0d49edbd545551ac143bee72418494b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:17Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.236627 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.236654 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.236662 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.236677 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 
15:25:17.236685 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:17Z","lastTransitionTime":"2026-01-22T15:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.339061 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.339108 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.339149 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.339168 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.339179 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:17Z","lastTransitionTime":"2026-01-22T15:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.441407 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.441451 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.441465 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.441483 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.441493 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:17Z","lastTransitionTime":"2026-01-22T15:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.505378 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:31:01.600529049 +0000 UTC Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.516716 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.516796 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:17 crc kubenswrapper[4825]: E0122 15:25:17.516836 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.516869 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:17 crc kubenswrapper[4825]: E0122 15:25:17.516928 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.516996 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:17 crc kubenswrapper[4825]: E0122 15:25:17.517054 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:17 crc kubenswrapper[4825]: E0122 15:25:17.517095 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.543119 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.543156 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.543168 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.543185 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.543197 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:17Z","lastTransitionTime":"2026-01-22T15:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.647559 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.647608 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.647630 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.647659 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.647677 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:17Z","lastTransitionTime":"2026-01-22T15:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.750101 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.750150 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.750162 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.750180 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.750194 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:17Z","lastTransitionTime":"2026-01-22T15:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.852559 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.852601 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.852612 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.852629 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.852641 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:17Z","lastTransitionTime":"2026-01-22T15:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.910320 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c8f2b_a2a796f1-0c22-4a59-a525-e426ecf221bc/ovnkube-controller/2.log" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.916825 4825 scope.go:117] "RemoveContainer" containerID="14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e" Jan 22 15:25:17 crc kubenswrapper[4825]: E0122 15:25:17.916970 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c8f2b_openshift-ovn-kubernetes(a2a796f1-0c22-4a59-a525-e426ecf221bc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.933766 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:17Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.945154 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:17Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.955283 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.955312 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.955323 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.955339 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.955350 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:17Z","lastTransitionTime":"2026-01-22T15:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.958698 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df00c8ba6b5cbf7b2512
875703b6873a0d49edbd545551ac143bee72418494b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:17Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.969865 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:17Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:17 crc kubenswrapper[4825]: I0122 15:25:17.989641 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:25:16Z\\\",\\\"message\\\":\\\"tes:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 15:25:16.504040 6603 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0122 15:25:16.504050 6603 services_controller.go:453] 
Built service openshift-kube-apiserver-operator/metrics template LB for network=default: []services.LB{}\\\\nI0122 15:25:16.504059 6603 services_controller.go:454] Service openshift-kube-apiserver-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 15:25:16.504058 6603 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0122 15:25:16.504098 6603 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controlle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:25:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c8f2b_openshift-ovn-kubernetes(a2a796f1-0c22-4a59-a525-e426ecf221bc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e
662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:17Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.005356 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17f70b06-0bde-412f-954f-fcfa00e88b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f5c3b2b5365400fac811b64160f64773868d3cf378e14ae72bd4f526a40b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6581cfe7742019e93e6ee5ec84f6ca535db9c
4f4bc8c4240a4642ebe498a1e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m4zbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:18Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.017748 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"969fd89f-29f7-421b-89de-a1d38e944869\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3fdedb467b8b1788321723365713bab9c0cad404c56cee6dbf32d4d9bf2c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c1d84c16dced7bc950479100fb3a934b5522d4ede9f73a3bfc5f084bbc0f853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0877de75267dd3219c7d77a784f896f75f5c4aafdd4fedda14f49d858064ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7bd1405bc592021fa0cf6bd6b9347ef4917bf2083a8008655d85a9535e38346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b7bd1405bc592021fa0cf6bd6b9347ef4917bf2083a8008655d85a9535e38346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:18Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.030944 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a7354
2ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:18Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.043494 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:18Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.054681 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:18Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.059329 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.059358 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.059368 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.059391 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.059435 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:18Z","lastTransitionTime":"2026-01-22T15:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.065596 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:18Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.074612 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrdl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"538e3056-0e80-4b71-ada6-b7440b283761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:18Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:18 crc 
kubenswrapper[4825]: I0122 15:25:18.088796 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:18Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.099486 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:18Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.111185 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b6
7b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:18Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.123441 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7934
26f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:18Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.134704 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:18Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.161139 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.161162 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.161170 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.161182 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.161190 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:18Z","lastTransitionTime":"2026-01-22T15:25:18Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.263444 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.263486 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.263495 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.263511 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.263520 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:18Z","lastTransitionTime":"2026-01-22T15:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.366111 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.366154 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.366165 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.366180 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.366191 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:18Z","lastTransitionTime":"2026-01-22T15:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.468407 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.468439 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.468450 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.468466 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.468478 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:18Z","lastTransitionTime":"2026-01-22T15:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.506142 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 19:21:18.136324902 +0000 UTC Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.571232 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.571278 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.571286 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.571304 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.571314 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:18Z","lastTransitionTime":"2026-01-22T15:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.673519 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.673550 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.673558 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.673570 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.673579 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:18Z","lastTransitionTime":"2026-01-22T15:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.775480 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.775519 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.775527 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.775541 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.775549 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:18Z","lastTransitionTime":"2026-01-22T15:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.877884 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.877942 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.877961 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.878030 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.878059 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:18Z","lastTransitionTime":"2026-01-22T15:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.980904 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.980935 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.980943 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.980957 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:18 crc kubenswrapper[4825]: I0122 15:25:18.980966 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:18Z","lastTransitionTime":"2026-01-22T15:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.082766 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.082796 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.082821 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.082835 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.082844 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:19Z","lastTransitionTime":"2026-01-22T15:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.185471 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.185512 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.185523 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.185538 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.185548 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:19Z","lastTransitionTime":"2026-01-22T15:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.287967 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.288023 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.288032 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.288046 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.288056 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:19Z","lastTransitionTime":"2026-01-22T15:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.390631 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.390670 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.390682 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.390700 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.390712 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:19Z","lastTransitionTime":"2026-01-22T15:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.492763 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.493083 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.493231 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.493442 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.493628 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:19Z","lastTransitionTime":"2026-01-22T15:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.507115 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 01:58:21.507714795 +0000 UTC Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.516546 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.516576 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:19 crc kubenswrapper[4825]: E0122 15:25:19.516675 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.516767 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.516832 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:19 crc kubenswrapper[4825]: E0122 15:25:19.516884 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:19 crc kubenswrapper[4825]: E0122 15:25:19.517059 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:19 crc kubenswrapper[4825]: E0122 15:25:19.517069 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.596241 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.596272 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.596281 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.596292 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.596301 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:19Z","lastTransitionTime":"2026-01-22T15:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.698845 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.698885 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.698898 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.698914 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.698926 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:19Z","lastTransitionTime":"2026-01-22T15:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.801322 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.801358 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.801368 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.801384 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.801396 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:19Z","lastTransitionTime":"2026-01-22T15:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.904201 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.904247 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.904257 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.904272 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:19 crc kubenswrapper[4825]: I0122 15:25:19.904282 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:19Z","lastTransitionTime":"2026-01-22T15:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.005913 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.005970 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.006068 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.006093 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.006109 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:20Z","lastTransitionTime":"2026-01-22T15:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.108210 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.108258 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.108267 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.108280 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.108288 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:20Z","lastTransitionTime":"2026-01-22T15:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.210548 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.210587 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.210599 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.210614 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.210625 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:20Z","lastTransitionTime":"2026-01-22T15:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.313042 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.313085 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.313097 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.313112 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.313123 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:20Z","lastTransitionTime":"2026-01-22T15:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.414962 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.415023 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.415033 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.415048 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.415058 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:20Z","lastTransitionTime":"2026-01-22T15:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.507793 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 18:10:13.417003506 +0000 UTC Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.517591 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.517624 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.517636 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.517652 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.517663 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:20Z","lastTransitionTime":"2026-01-22T15:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.619909 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.620088 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.620104 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.620123 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.620135 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:20Z","lastTransitionTime":"2026-01-22T15:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.722353 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.722408 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.722425 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.722447 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.722464 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:20Z","lastTransitionTime":"2026-01-22T15:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.825090 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.825171 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.825197 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.825227 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.825266 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:20Z","lastTransitionTime":"2026-01-22T15:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.927607 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.927648 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.927656 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.927671 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:20 crc kubenswrapper[4825]: I0122 15:25:20.927681 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:20Z","lastTransitionTime":"2026-01-22T15:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.030509 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.030545 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.030556 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.030571 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.030579 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:21Z","lastTransitionTime":"2026-01-22T15:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.132767 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.132812 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.132824 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.132842 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.132853 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:21Z","lastTransitionTime":"2026-01-22T15:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.235183 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.235224 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.235235 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.235250 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.235260 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:21Z","lastTransitionTime":"2026-01-22T15:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.337875 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.337911 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.337921 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.337935 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.337944 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:21Z","lastTransitionTime":"2026-01-22T15:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.440749 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.440782 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.440790 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.440801 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.440810 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:21Z","lastTransitionTime":"2026-01-22T15:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.508651 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 18:03:37.291315388 +0000 UTC Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.516921 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.516953 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.516965 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:21 crc kubenswrapper[4825]: E0122 15:25:21.517212 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.517274 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:21 crc kubenswrapper[4825]: E0122 15:25:21.517360 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:21 crc kubenswrapper[4825]: E0122 15:25:21.517413 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:21 crc kubenswrapper[4825]: E0122 15:25:21.517514 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.542774 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.542830 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.542846 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.542860 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.542871 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:21Z","lastTransitionTime":"2026-01-22T15:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.644889 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.644924 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.644934 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.644950 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.645002 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:21Z","lastTransitionTime":"2026-01-22T15:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.747197 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.747248 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.747258 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.747278 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.747291 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:21Z","lastTransitionTime":"2026-01-22T15:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.849449 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.849498 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.849510 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.849527 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.849537 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:21Z","lastTransitionTime":"2026-01-22T15:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.951401 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.951438 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.951446 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.951461 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:21 crc kubenswrapper[4825]: I0122 15:25:21.951470 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:21Z","lastTransitionTime":"2026-01-22T15:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.055037 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.055082 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.055093 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.055109 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.055120 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:22Z","lastTransitionTime":"2026-01-22T15:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.157739 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.157794 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.157806 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.157822 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.157834 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:22Z","lastTransitionTime":"2026-01-22T15:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.264484 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.264533 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.264543 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.264560 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.264571 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:22Z","lastTransitionTime":"2026-01-22T15:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.365926 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.365992 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.366002 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.366018 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.366028 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:22Z","lastTransitionTime":"2026-01-22T15:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.468522 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.468590 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.468618 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.468643 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.468658 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:22Z","lastTransitionTime":"2026-01-22T15:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.509443 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:21:12.001800606 +0000 UTC Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.570130 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.570178 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.570190 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.570206 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.570219 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:22Z","lastTransitionTime":"2026-01-22T15:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.672795 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.672852 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.672866 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.672884 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.672898 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:22Z","lastTransitionTime":"2026-01-22T15:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.774679 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.774718 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.774728 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.774745 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.774755 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:22Z","lastTransitionTime":"2026-01-22T15:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.877402 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.877438 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.877446 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.877458 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.877466 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:22Z","lastTransitionTime":"2026-01-22T15:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.979471 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.979515 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.979524 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.979538 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:22 crc kubenswrapper[4825]: I0122 15:25:22.979548 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:22Z","lastTransitionTime":"2026-01-22T15:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.082406 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.082465 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.082483 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.082507 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.082525 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:23Z","lastTransitionTime":"2026-01-22T15:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.184935 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.184999 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.185012 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.185029 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.185040 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:23Z","lastTransitionTime":"2026-01-22T15:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.288255 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.288302 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.288319 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.288342 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.288360 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:23Z","lastTransitionTime":"2026-01-22T15:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.390511 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.390555 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.390566 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.390582 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.390593 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:23Z","lastTransitionTime":"2026-01-22T15:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.493099 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.493143 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.493154 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.493170 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.493179 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:23Z","lastTransitionTime":"2026-01-22T15:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.509569 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:10:32.524594466 +0000 UTC
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.516925 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8"
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.516960 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.516952 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.516925 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 15:25:23 crc kubenswrapper[4825]: E0122 15:25:23.517055 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761"
Jan 22 15:25:23 crc kubenswrapper[4825]: E0122 15:25:23.517130 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 15:25:23 crc kubenswrapper[4825]: E0122 15:25:23.517257 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 15:25:23 crc kubenswrapper[4825]: E0122 15:25:23.517368 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.530246 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:23Z is after 2025-08-24T17:21:41Z"
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.542535 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:23Z is after 2025-08-24T17:21:41Z"
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.557903 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df00c8ba6b5cbf7b2512875703b6873a0d49edbd545551ac143bee72418494b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557d
b28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:23Z is after 2025-08-24T17:21:41Z"
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.573552 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:23Z is after 2025-08-24T17:21:41Z"
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.594447 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.594487 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.594497 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.594514 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.594524 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:23Z","lastTransitionTime":"2026-01-22T15:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.609220 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:25:16Z\\\",\\\"message\\\":\\\"tes:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 15:25:16.504040 6603 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0122 15:25:16.504050 6603 services_controller.go:453] 
Built service openshift-kube-apiserver-operator/metrics template LB for network=default: []services.LB{}\\\\nI0122 15:25:16.504059 6603 services_controller.go:454] Service openshift-kube-apiserver-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 15:25:16.504058 6603 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0122 15:25:16.504098 6603 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controlle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:25:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c8f2b_openshift-ovn-kubernetes(a2a796f1-0c22-4a59-a525-e426ecf221bc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e
662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:23Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.626399 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17f70b06-0bde-412f-954f-fcfa00e88b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f5c3b2b5365400fac811b64160f64773868d3cf378e14ae72bd4f526a40b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6581cfe7742019e93e6ee5ec84f6ca535db9c
4f4bc8c4240a4642ebe498a1e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m4zbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:23Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.638452 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"969fd89f-29f7-421b-89de-a1d38e944869\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3fdedb467b8b1788321723365713bab9c0cad404c56cee6dbf32d4d9bf2c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c1d84c16dced7bc950479100fb3a934b5522d4ede9f73a3bfc5f084bbc0f853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0877de75267dd3219c7d77a784f896f75f5c4aafdd4fedda14f49d858064ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7bd1405bc592021fa0cf6bd6b9347ef4917bf2083a8008655d85a9535e38346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b7bd1405bc592021fa0cf6bd6b9347ef4917bf2083a8008655d85a9535e38346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:23Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.652042 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a7354
2ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:23Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.666411 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:23Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.678304 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:23Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.689366 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:23Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.697042 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.697085 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.697095 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.697111 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.697120 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:23Z","lastTransitionTime":"2026-01-22T15:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.699608 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"538e3056-0e80-4b71-ada6-b7440b283761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:23Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:23 crc 
kubenswrapper[4825]: I0122 15:25:23.711467 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:23Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.722674 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:23Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.735265 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b6
7b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:23Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.745514 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7934
26f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:23Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.753959 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:23Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.799584 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.799610 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.799618 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.799632 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.799640 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:23Z","lastTransitionTime":"2026-01-22T15:25:23Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.885225 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs\") pod \"network-metrics-daemon-hrdl8\" (UID: \"538e3056-0e80-4b71-ada6-b7440b283761\") " pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:23 crc kubenswrapper[4825]: E0122 15:25:23.885360 4825 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 15:25:23 crc kubenswrapper[4825]: E0122 15:25:23.885413 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs podName:538e3056-0e80-4b71-ada6-b7440b283761 nodeName:}" failed. No retries permitted until 2026-01-22 15:25:55.885399165 +0000 UTC m=+102.646926075 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs") pod "network-metrics-daemon-hrdl8" (UID: "538e3056-0e80-4b71-ada6-b7440b283761") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.902148 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.902176 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.902185 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.902201 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:23 crc kubenswrapper[4825]: I0122 15:25:23.902211 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:23Z","lastTransitionTime":"2026-01-22T15:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.004268 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.004291 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.004301 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.004314 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.004323 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:24Z","lastTransitionTime":"2026-01-22T15:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.105851 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.105885 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.105895 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.105908 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.105918 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:24Z","lastTransitionTime":"2026-01-22T15:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.208085 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.208116 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.208127 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.208142 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.208153 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:24Z","lastTransitionTime":"2026-01-22T15:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.310918 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.310957 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.310969 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.311003 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.311015 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:24Z","lastTransitionTime":"2026-01-22T15:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.413602 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.413647 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.413658 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.413677 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.413691 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:24Z","lastTransitionTime":"2026-01-22T15:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.510369 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:57:16.417065575 +0000 UTC Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.516232 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.516284 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.516301 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.516328 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.516371 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:24Z","lastTransitionTime":"2026-01-22T15:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.618696 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.618740 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.618749 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.618764 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.618774 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:24Z","lastTransitionTime":"2026-01-22T15:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.720767 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.720797 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.720804 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.720819 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.720830 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:24Z","lastTransitionTime":"2026-01-22T15:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.823595 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.823631 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.823641 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.823661 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.823672 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:24Z","lastTransitionTime":"2026-01-22T15:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.926481 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.926522 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.926531 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.926546 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:24 crc kubenswrapper[4825]: I0122 15:25:24.926556 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:24Z","lastTransitionTime":"2026-01-22T15:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.029279 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.029324 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.029335 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.029352 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.029365 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:25Z","lastTransitionTime":"2026-01-22T15:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.131617 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.131663 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.131674 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.131691 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.131702 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:25Z","lastTransitionTime":"2026-01-22T15:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.235597 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.235688 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.235740 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.235766 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.235782 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:25Z","lastTransitionTime":"2026-01-22T15:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.337514 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.337550 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.337559 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.337573 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.337584 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:25Z","lastTransitionTime":"2026-01-22T15:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.440082 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.440121 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.440132 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.440147 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.440158 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:25Z","lastTransitionTime":"2026-01-22T15:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.511205 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 09:13:06.815787869 +0000 UTC Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.517145 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.517198 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.517156 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.517155 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:25 crc kubenswrapper[4825]: E0122 15:25:25.517321 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:25 crc kubenswrapper[4825]: E0122 15:25:25.517391 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:25 crc kubenswrapper[4825]: E0122 15:25:25.517537 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:25 crc kubenswrapper[4825]: E0122 15:25:25.517645 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.541929 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.541969 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.541994 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.542011 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.542022 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:25Z","lastTransitionTime":"2026-01-22T15:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.550259 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.550289 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.550300 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.550315 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.550332 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:25Z","lastTransitionTime":"2026-01-22T15:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:25 crc kubenswrapper[4825]: E0122 15:25:25.563104 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:25Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.566872 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.566899 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.566910 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.566926 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.566936 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:25Z","lastTransitionTime":"2026-01-22T15:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:25 crc kubenswrapper[4825]: E0122 15:25:25.582968 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:25Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.586107 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.586152 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.586166 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.586184 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.586195 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:25Z","lastTransitionTime":"2026-01-22T15:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:25 crc kubenswrapper[4825]: E0122 15:25:25.597468 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:25Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.600775 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.600907 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.601029 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.601149 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.601241 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:25Z","lastTransitionTime":"2026-01-22T15:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:25 crc kubenswrapper[4825]: E0122 15:25:25.615726 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:25Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.618894 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.618925 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.618936 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.618950 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.618961 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:25Z","lastTransitionTime":"2026-01-22T15:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:25 crc kubenswrapper[4825]: E0122 15:25:25.632493 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:25Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:25 crc kubenswrapper[4825]: E0122 15:25:25.632649 4825 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.643909 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.643945 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.643953 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.643966 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.643993 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:25Z","lastTransitionTime":"2026-01-22T15:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.746304 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.746360 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.746379 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.746401 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.746418 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:25Z","lastTransitionTime":"2026-01-22T15:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.849543 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.849605 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.849617 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.849634 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.849645 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:25Z","lastTransitionTime":"2026-01-22T15:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.937896 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ljkjt_049abb37-810d-475f-b042-bceb43e81dd5/kube-multus/0.log"
Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.937941 4825 generic.go:334] "Generic (PLEG): container finished" podID="049abb37-810d-475f-b042-bceb43e81dd5" containerID="529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f" exitCode=1
Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.937965 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ljkjt" event={"ID":"049abb37-810d-475f-b042-bceb43e81dd5","Type":"ContainerDied","Data":"529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f"}
Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.938278 4825 scope.go:117] "RemoveContainer" containerID="529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f"
Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.952401 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.952450 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.952464 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.952483 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.952494 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:25Z","lastTransitionTime":"2026-01-22T15:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.959741 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b
8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:25Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.977803 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:25Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:25 crc kubenswrapper[4825]: I0122 15:25:25.993751 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:25Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.005819 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:26Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.016945 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"538e3056-0e80-4b71-ada6-b7440b283761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:26Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:26 crc 
kubenswrapper[4825]: I0122 15:25:26.028390 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"969fd89f-29f7-421b-89de-a1d38e944869\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3fdedb467b8b1788321723365713bab9c0cad404c56cee6dbf32d4d9bf2c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c1d84c16dced7bc950479100fb3a934b5522d4ede9f73a3bfc5f084bbc0f853\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0877de75267dd3219c7d77a784f896f75f5c4aafdd4fedda14f49d858064ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7bd1405bc592021fa0cf6bd6b9347ef4917bf2083a8008655d85a9535e38346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7bd1405bc592021fa0cf6bd6b9347ef4917bf2083a8008655d85a9535e38346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:26Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.038836 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:25:26Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.050072 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"2026-01-22T15:24:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9d65635e-b490-4fd4-9ef6-6a8111dc8135\\\\n2026-01-22T15:24:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9d65635e-b490-4fd4-9ef6-6a8111dc8135 to /host/opt/cni/bin/\\\\n2026-01-22T15:24:40Z [verbose] multus-daemon started\\\\n2026-01-22T15:24:40Z [verbose] Readiness Indicator file check\\\\n2026-01-22T15:25:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:26Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.054591 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.054650 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.054662 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.054680 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.054693 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:26Z","lastTransitionTime":"2026-01-22T15:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.061428 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:26Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.069524 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:26Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.079367 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:26Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.088038 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:26Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.098365 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df00c8ba6b5cbf7b2512875703b6873a0d49edbd545551ac143bee72418494b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557d
b28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:26Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.107480 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:26Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.118071 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:26Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.134340 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:25:16Z\\\",\\\"message\\\":\\\"tes:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 15:25:16.504040 6603 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0122 15:25:16.504050 6603 services_controller.go:453] Built service openshift-kube-apiserver-operator/metrics template LB for network=default: []services.LB{}\\\\nI0122 15:25:16.504059 6603 services_controller.go:454] Service openshift-kube-apiserver-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 15:25:16.504058 6603 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0122 15:25:16.504098 6603 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controlle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:25:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c8f2b_openshift-ovn-kubernetes(a2a796f1-0c22-4a59-a525-e426ecf221bc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e
662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:26Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.143936 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17f70b06-0bde-412f-954f-fcfa00e88b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f5c3b2b5365400fac811b64160f64773868d3cf378e14ae72bd4f526a40b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6581cfe7742019e93e6ee5ec84f6ca535db9c
4f4bc8c4240a4642ebe498a1e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m4zbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:26Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.156786 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.156820 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.156829 4825 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.156846 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.156857 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:26Z","lastTransitionTime":"2026-01-22T15:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.258685 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.258713 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.258721 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.258735 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.258744 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:26Z","lastTransitionTime":"2026-01-22T15:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.360417 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.360460 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.360468 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.360483 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.360491 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:26Z","lastTransitionTime":"2026-01-22T15:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.462578 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.462615 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.462624 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.462638 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.462666 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:26Z","lastTransitionTime":"2026-01-22T15:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.511951 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 04:20:47.88704157 +0000 UTC Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.565303 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.565367 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.565381 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.565398 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.565411 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:26Z","lastTransitionTime":"2026-01-22T15:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.667533 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.667577 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.667587 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.667605 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.667616 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:26Z","lastTransitionTime":"2026-01-22T15:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.770012 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.770069 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.770088 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.770108 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.770123 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:26Z","lastTransitionTime":"2026-01-22T15:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.872440 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.872539 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.872557 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.872575 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.872586 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:26Z","lastTransitionTime":"2026-01-22T15:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.945096 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ljkjt_049abb37-810d-475f-b042-bceb43e81dd5/kube-multus/0.log" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.945222 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ljkjt" event={"ID":"049abb37-810d-475f-b042-bceb43e81dd5","Type":"ContainerStarted","Data":"f67902ec5693f8ee504f3f703021123e51609876caa0e33faeb018883a8aca56"} Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.962475 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:26Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.974949 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.975015 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:26 crc kubenswrapper[4825]: 
I0122 15:25:26.975025 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.975038 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.975047 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:26Z","lastTransitionTime":"2026-01-22T15:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:26 crc kubenswrapper[4825]: I0122 15:25:26.984663 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:26Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.001789 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:26Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.014851 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:27Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.029209 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"538e3056-0e80-4b71-ada6-b7440b283761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:27Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:27 crc 
kubenswrapper[4825]: I0122 15:25:27.046134 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"969fd89f-29f7-421b-89de-a1d38e944869\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3fdedb467b8b1788321723365713bab9c0cad404c56cee6dbf32d4d9bf2c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c1d84c16dced7bc950479100fb3a934b5522d4ede9f73a3bfc5f084bbc0f853\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0877de75267dd3219c7d77a784f896f75f5c4aafdd4fedda14f49d858064ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7bd1405bc592021fa0cf6bd6b9347ef4917bf2083a8008655d85a9535e38346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7bd1405bc592021fa0cf6bd6b9347ef4917bf2083a8008655d85a9535e38346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:27Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.061081 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:25:27Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.077784 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.077850 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.077870 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.077897 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.077915 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:27Z","lastTransitionTime":"2026-01-22T15:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.078949 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67902ec5693f8ee504f3f703021123e51609876caa0e33faeb018883a8aca56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"2026-01-22T15:24:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9d65635e-b490-4fd4-9ef6-6a8111dc8135\\\\n2026-01-22T15:24:40+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9d65635e-b490-4fd4-9ef6-6a8111dc8135 to /host/opt/cni/bin/\\\\n2026-01-22T15:24:40Z [verbose] multus-daemon started\\\\n2026-01-22T15:24:40Z [verbose] Readiness Indicator file check\\\\n2026-01-22T15:25:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:27Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.095652 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8
597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:27Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.109322 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:27Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.123562 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:27Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.137378 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:27Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.151628 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df00c8ba6b5cbf7b2512875703b6873a0d49edbd545551ac143bee72418494b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557d
b28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:27Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.166663 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:27Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.179067 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:27Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.181078 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.181142 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.181193 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.181225 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.181248 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:27Z","lastTransitionTime":"2026-01-22T15:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.196870 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:25:16Z\\\",\\\"message\\\":\\\"tes:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 15:25:16.504040 6603 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0122 15:25:16.504050 6603 services_controller.go:453] 
Built service openshift-kube-apiserver-operator/metrics template LB for network=default: []services.LB{}\\\\nI0122 15:25:16.504059 6603 services_controller.go:454] Service openshift-kube-apiserver-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 15:25:16.504058 6603 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0122 15:25:16.504098 6603 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controlle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:25:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c8f2b_openshift-ovn-kubernetes(a2a796f1-0c22-4a59-a525-e426ecf221bc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e
662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:27Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.207607 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17f70b06-0bde-412f-954f-fcfa00e88b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f5c3b2b5365400fac811b64160f64773868d3cf378e14ae72bd4f526a40b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6581cfe7742019e93e6ee5ec84f6ca535db9c
4f4bc8c4240a4642ebe498a1e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m4zbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:27Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.284329 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.284388 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.284400 4825 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.284419 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.284433 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:27Z","lastTransitionTime":"2026-01-22T15:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.387114 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.387158 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.387189 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.387210 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.387222 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:27Z","lastTransitionTime":"2026-01-22T15:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.490060 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.490097 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.490112 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.490131 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.490142 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:27Z","lastTransitionTime":"2026-01-22T15:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.512878 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 18:27:20.303139393 +0000 UTC Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.516336 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.516366 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.516423 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.516445 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:27 crc kubenswrapper[4825]: E0122 15:25:27.516540 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:27 crc kubenswrapper[4825]: E0122 15:25:27.516628 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:27 crc kubenswrapper[4825]: E0122 15:25:27.516748 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:27 crc kubenswrapper[4825]: E0122 15:25:27.516870 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.592048 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.592086 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.592095 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.592110 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.592121 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:27Z","lastTransitionTime":"2026-01-22T15:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.694613 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.694659 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.694672 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.694691 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.694702 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:27Z","lastTransitionTime":"2026-01-22T15:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.797317 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.797360 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.797376 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.797392 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.797403 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:27Z","lastTransitionTime":"2026-01-22T15:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.900071 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.900146 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.900168 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.900200 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:27 crc kubenswrapper[4825]: I0122 15:25:27.900222 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:27Z","lastTransitionTime":"2026-01-22T15:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.003513 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.003655 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.003740 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.003808 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.003833 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:28Z","lastTransitionTime":"2026-01-22T15:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.106590 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.106631 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.106639 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.106655 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.106666 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:28Z","lastTransitionTime":"2026-01-22T15:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.208501 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.208540 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.208553 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.208575 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.208590 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:28Z","lastTransitionTime":"2026-01-22T15:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.311161 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.311204 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.311247 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.311264 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.311275 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:28Z","lastTransitionTime":"2026-01-22T15:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.413046 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.413109 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.413117 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.413130 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.413138 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:28Z","lastTransitionTime":"2026-01-22T15:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.513851 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:58:42.703688798 +0000 UTC Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.515388 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.515413 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.515425 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.515448 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.515463 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:28Z","lastTransitionTime":"2026-01-22T15:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.517208 4825 scope.go:117] "RemoveContainer" containerID="14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e" Jan 22 15:25:28 crc kubenswrapper[4825]: E0122 15:25:28.517330 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c8f2b_openshift-ovn-kubernetes(a2a796f1-0c22-4a59-a525-e426ecf221bc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.619296 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.619346 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.619360 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.619378 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.619398 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:28Z","lastTransitionTime":"2026-01-22T15:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.722128 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.722165 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.722174 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.722207 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.722218 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:28Z","lastTransitionTime":"2026-01-22T15:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.824295 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.824342 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.824357 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.824378 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.824392 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:28Z","lastTransitionTime":"2026-01-22T15:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.926975 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.927091 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.927112 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.927135 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:28 crc kubenswrapper[4825]: I0122 15:25:28.927152 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:28Z","lastTransitionTime":"2026-01-22T15:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.028957 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.029011 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.029019 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.029031 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.029041 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:29Z","lastTransitionTime":"2026-01-22T15:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.131773 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.131820 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.131831 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.131847 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.131859 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:29Z","lastTransitionTime":"2026-01-22T15:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.234907 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.234960 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.234975 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.235017 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.235032 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:29Z","lastTransitionTime":"2026-01-22T15:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.337312 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.337379 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.337389 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.337402 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.337411 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:29Z","lastTransitionTime":"2026-01-22T15:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.439228 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.439274 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.439286 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.439302 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.439316 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:29Z","lastTransitionTime":"2026-01-22T15:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.514620 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:51:37.730612949 +0000 UTC Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.517025 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.517038 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.517087 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.517144 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:29 crc kubenswrapper[4825]: E0122 15:25:29.517290 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:29 crc kubenswrapper[4825]: E0122 15:25:29.517619 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:29 crc kubenswrapper[4825]: E0122 15:25:29.517728 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:29 crc kubenswrapper[4825]: E0122 15:25:29.517814 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.542074 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.542112 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.542120 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.542132 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.542144 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:29Z","lastTransitionTime":"2026-01-22T15:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.644827 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.644888 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.644902 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.644956 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.644974 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:29Z","lastTransitionTime":"2026-01-22T15:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.747965 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.748026 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.748039 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.748055 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.748066 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:29Z","lastTransitionTime":"2026-01-22T15:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.850527 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.850783 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.850824 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.850854 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.850875 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:29Z","lastTransitionTime":"2026-01-22T15:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.954116 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.954165 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.954173 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.954191 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:29 crc kubenswrapper[4825]: I0122 15:25:29.954202 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:29Z","lastTransitionTime":"2026-01-22T15:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.057263 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.057324 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.057334 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.057351 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.057363 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:30Z","lastTransitionTime":"2026-01-22T15:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.159713 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.159763 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.159776 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.159795 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.159808 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:30Z","lastTransitionTime":"2026-01-22T15:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.262954 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.263037 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.263054 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.263075 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.263089 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:30Z","lastTransitionTime":"2026-01-22T15:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.365931 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.366035 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.366060 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.366094 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.366125 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:30Z","lastTransitionTime":"2026-01-22T15:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.469050 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.469098 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.469110 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.469130 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.469146 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:30Z","lastTransitionTime":"2026-01-22T15:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.515530 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 17:43:02.462532252 +0000 UTC Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.525619 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.571472 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.571524 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.571534 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.571551 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.571562 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:30Z","lastTransitionTime":"2026-01-22T15:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.673969 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.674047 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.674073 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.674095 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.674110 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:30Z","lastTransitionTime":"2026-01-22T15:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.776788 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.776828 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.776839 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.776856 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.776867 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:30Z","lastTransitionTime":"2026-01-22T15:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.879071 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.879112 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.879120 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.879133 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.879144 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:30Z","lastTransitionTime":"2026-01-22T15:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.981285 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.981337 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.981348 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.981364 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:30 crc kubenswrapper[4825]: I0122 15:25:30.981375 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:30Z","lastTransitionTime":"2026-01-22T15:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.083323 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.083394 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.083425 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.083456 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.083480 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:31Z","lastTransitionTime":"2026-01-22T15:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.187039 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.187086 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.187096 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.187112 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.187123 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:31Z","lastTransitionTime":"2026-01-22T15:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.290138 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.290188 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.290203 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.290223 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.290239 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:31Z","lastTransitionTime":"2026-01-22T15:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.393742 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.393779 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.393791 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.393808 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.393822 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:31Z","lastTransitionTime":"2026-01-22T15:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.496675 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.496722 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.496736 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.496756 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.496791 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:31Z","lastTransitionTime":"2026-01-22T15:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.516381 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 17:39:08.480245952 +0000 UTC Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.516561 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:31 crc kubenswrapper[4825]: E0122 15:25:31.516677 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.516745 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.516928 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:31 crc kubenswrapper[4825]: E0122 15:25:31.516920 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.516972 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:31 crc kubenswrapper[4825]: E0122 15:25:31.517075 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:31 crc kubenswrapper[4825]: E0122 15:25:31.517132 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.599057 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.599108 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.599119 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.599137 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.599152 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:31Z","lastTransitionTime":"2026-01-22T15:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.702054 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.702131 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.702146 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.702168 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.702184 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:31Z","lastTransitionTime":"2026-01-22T15:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.804828 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.804878 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.804890 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.804911 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.804926 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:31Z","lastTransitionTime":"2026-01-22T15:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.907748 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.907816 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.907839 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.907871 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:31 crc kubenswrapper[4825]: I0122 15:25:31.907892 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:31Z","lastTransitionTime":"2026-01-22T15:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.010567 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.010631 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.010650 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.010674 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.010691 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:32Z","lastTransitionTime":"2026-01-22T15:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.113311 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.113360 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.113376 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.113398 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.113412 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:32Z","lastTransitionTime":"2026-01-22T15:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.215765 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.215799 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.215808 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.215824 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.215832 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:32Z","lastTransitionTime":"2026-01-22T15:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.318093 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.318150 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.318163 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.318184 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.318196 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:32Z","lastTransitionTime":"2026-01-22T15:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.420976 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.421037 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.421047 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.421060 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.421069 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:32Z","lastTransitionTime":"2026-01-22T15:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.516845 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 04:18:47.490943094 +0000 UTC Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.524730 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.524794 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.524805 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.524822 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.524834 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:32Z","lastTransitionTime":"2026-01-22T15:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.627940 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.628021 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.628040 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.628063 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.628079 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:32Z","lastTransitionTime":"2026-01-22T15:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.730974 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.731044 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.731058 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.731077 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.731092 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:32Z","lastTransitionTime":"2026-01-22T15:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.834399 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.834467 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.834491 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.834517 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.834534 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:32Z","lastTransitionTime":"2026-01-22T15:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.937700 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.937781 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.937805 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.937834 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:32 crc kubenswrapper[4825]: I0122 15:25:32.937869 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:32Z","lastTransitionTime":"2026-01-22T15:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.040708 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.040783 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.040806 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.040834 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.040857 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:33Z","lastTransitionTime":"2026-01-22T15:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.143714 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.143794 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.143817 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.143846 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.143872 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:33Z","lastTransitionTime":"2026-01-22T15:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.246878 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.246914 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.246924 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.246941 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.246971 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:33Z","lastTransitionTime":"2026-01-22T15:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.350417 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.350469 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.350479 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.350496 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.350508 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:33Z","lastTransitionTime":"2026-01-22T15:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.453583 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.453704 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.454538 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.454581 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.454612 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:33Z","lastTransitionTime":"2026-01-22T15:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.516995 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:33 crc kubenswrapper[4825]: E0122 15:25:33.517134 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.517333 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:33 crc kubenswrapper[4825]: E0122 15:25:33.517400 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.517456 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.517521 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:33 crc kubenswrapper[4825]: E0122 15:25:33.517539 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:33 crc kubenswrapper[4825]: E0122 15:25:33.517689 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.517287 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 03:28:47.172833922 +0000 UTC Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.529918 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2555f50772bbda4e123fdec83e423f7d626c731827c35e05295289a807e73948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\
"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://516f32f26269397fa805a21ec38d9d4b068df8d54a06d5e1aeb51e816bcd05af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.543180 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8jk65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf928ed7-f98c-4ced-b3d7-cb4700d3a906\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e850152d2593d27abaec8047c78f0bf299831a5dfdc71222c0d7f614edab1fc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vt2ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8jk65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.557398 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.557431 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.557445 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.557463 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.557476 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:33Z","lastTransitionTime":"2026-01-22T15:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.557595 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"538e3056-0e80-4b71-ada6-b7440b283761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r29dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:33 crc 
kubenswrapper[4825]: I0122 15:25:33.571602 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"969fd89f-29f7-421b-89de-a1d38e944869\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c3fdedb467b8b1788321723365713bab9c0cad404c56cee6dbf32d4d9bf2c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c1d84c16dced7bc950479100fb3a934b5522d4ede9f73a3bfc5f084bbc0f853\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0877de75267dd3219c7d77a784f896f75f5c4aafdd4fedda14f49d858064ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7bd1405bc592021fa0cf6bd6b9347ef4917bf2083a8008655d85a9535e38346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7bd1405bc592021fa0cf6bd6b9347ef4917bf2083a8008655d85a9535e38346\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.592452 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e0b252c-291b-4c92-9f1a-f10e9026fcb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T15:24:31Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 15:24:25.938290 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 15:24:25.940653 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1604083302/tls.crt::/tmp/serving-cert-1604083302/tls.key\\\\\\\"\\\\nI0122 15:24:31.472710 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 15:24:31.475148 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 15:24:31.475170 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 15:24:31.475190 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 15:24:31.475196 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 15:24:31.479823 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 15:24:31.479868 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 15:24:31.479888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 15:24:31.479896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 15:24:31.479902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 15:24:31.479908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 15:24:31.479838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 15:24:31.480796 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf84e75bc4bfcbd7f6cb3f904052a7354
2ae47623a118fc94d6f920472d160bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.610144 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.624899 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k59vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c42dacaf-0842-4484-8d2d-4b36805194be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a618d5072f8d23103f06a8dd05cba35a845a3a62e84414dc761a522d80b7534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2dwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k59vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.643367 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d06480efc26ea575533400da7f30e544d019a29eaaa44d70ccffd62aa0384d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-22T15:24:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.655122 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://468cc4d4bec0beb222ae4b0aa68ca278ef0202d2442e1eaa7c65521b9a32972e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T15:25:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.659468 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.659545 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.659559 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.659577 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.659588 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:33Z","lastTransitionTime":"2026-01-22T15:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.668791 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ljkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049abb37-810d-475f-b042-bceb43e81dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67902ec5693f8ee504f3f703021123e51609876caa0e33faeb018883a8aca56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:25:25Z\\\",\\\"message\\\":\\\"2026-01-22T15:24:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9d65635e-b490-4fd4-9ef6-6a8111dc8135\\\\n2026-01-22T15:24:40+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9d65635e-b490-4fd4-9ef6-6a8111dc8135 to /host/opt/cni/bin/\\\\n2026-01-22T15:24:40Z [verbose] multus-daemon started\\\\n2026-01-22T15:24:40Z [verbose] Readiness Indicator file check\\\\n2026-01-22T15:25:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ldrdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ljkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.678851 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d6015ae-d193-4854-9861-dc4384510fdb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f6154b25c07d9722644879d54fef5a3364569718c5556779c67bb18a5a0b8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8
597a94001c2491eaa1778b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xq8gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k9wpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.692554 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26d4b264-ee2f-41e1-a123-78320ccfca87\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba51f373199c7d627b06f399d55d404a64162b68f47718fb31deed3debef219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc9875452700c5e88534d543798c399ea41804e612b2a39d9d9f162ccd767b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16454fc0242f6f74f73258225ccc2f8efe76d13ddc8e14bfc2630a37625aecfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T15:24:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.703932 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.718966 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85f26f27-4ca0-42df-a11b-fa27e42eb3c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df00c8ba6b5cbf7b2512875703b6873a0d49edbd545551ac143bee72418494b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e55c4a56e1e9dc1fca50216f661bab6465216b77faa0beab36e184e25f470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8699b004a27afd1e6f2bfa7ec94809782ae3c73928d800f4015e44fd3b35fef5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda74b0117aa13e5fc4434908e77d0667ba03fa0816ebf8a25afa225f6654ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f557d
b28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f557db28ef7c4b806872398cc66597f045ca66e403454da90a9a289a63b95e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://740c4ccbdb8cb84130a9da1b738769d2b9117dcbb55a9d8a95533a2ba72c0c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd447515a1fefd6e021a9f579b383aaae0cf5b58ba156239b84159c3334fb04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5bzgc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.730520 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b53fb2c-aec7-4685-b3ae-a7e3fc9c5945\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c7bd4f05b8362d0e74900120afee1ec61a6cc125af950b4e7d4906836ad9f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edbd73cf546d17782f4c06dfbe6084c22ace44d3a6cdf01039d7b4473c771db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbd73cf546d17782f4c06dfbe6084c22ace44d3a6cdf01039d7b4473c771db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.745200 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.762044 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.762078 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.762086 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.762100 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.762111 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:33Z","lastTransitionTime":"2026-01-22T15:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.771922 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a796f1-0c22-4a59-a525-e426ecf221bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T15:25:16Z\\\",\\\"message\\\":\\\"tes:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 15:25:16.504040 6603 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0122 15:25:16.504050 6603 services_controller.go:453] 
Built service openshift-kube-apiserver-operator/metrics template LB for network=default: []services.LB{}\\\\nI0122 15:25:16.504059 6603 services_controller.go:454] Service openshift-kube-apiserver-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 15:25:16.504058 6603 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0122 15:25:16.504098 6603 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controlle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T15:25:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c8f2b_openshift-ovn-kubernetes(a2a796f1-0c22-4a59-a525-e426ecf221bc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afd46509b701e1993e
662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T15:24:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T15:24:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mm2tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c8f2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.786369 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17f70b06-0bde-412f-954f-fcfa00e88b6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T15:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f5c3b2b5365400fac811b64160f64773868d3cf378e14ae72bd4f526a40b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6581cfe7742019e93e6ee5ec84f6ca535db9c
4f4bc8c4240a4642ebe498a1e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T15:24:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs8wn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T15:24:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m4zbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:33Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.868781 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.868821 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.868833 4825 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.868851 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.868862 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:33Z","lastTransitionTime":"2026-01-22T15:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.970637 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.970674 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.970683 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.970698 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:33 crc kubenswrapper[4825]: I0122 15:25:33.970708 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:33Z","lastTransitionTime":"2026-01-22T15:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.073964 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.074066 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.074089 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.074112 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.074131 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:34Z","lastTransitionTime":"2026-01-22T15:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.177491 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.177528 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.177538 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.177556 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.177568 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:34Z","lastTransitionTime":"2026-01-22T15:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.280818 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.280877 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.280892 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.280915 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.280931 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:34Z","lastTransitionTime":"2026-01-22T15:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.383747 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.383803 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.383820 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.383835 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.383845 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:34Z","lastTransitionTime":"2026-01-22T15:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.487407 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.487453 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.487463 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.487480 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.487491 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:34Z","lastTransitionTime":"2026-01-22T15:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.518910 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 21:11:05.756566918 +0000 UTC Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.590844 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.590924 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.590947 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.591024 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.591050 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:34Z","lastTransitionTime":"2026-01-22T15:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.694270 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.694330 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.694349 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.694373 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.694392 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:34Z","lastTransitionTime":"2026-01-22T15:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.796894 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.796965 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.797025 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.797071 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.797110 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:34Z","lastTransitionTime":"2026-01-22T15:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.899810 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.899861 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.899877 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.899902 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:34 crc kubenswrapper[4825]: I0122 15:25:34.899922 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:34Z","lastTransitionTime":"2026-01-22T15:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.003211 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.003289 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.003306 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.003351 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.003365 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:35Z","lastTransitionTime":"2026-01-22T15:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.106776 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.106829 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.106847 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.106872 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.106890 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:35Z","lastTransitionTime":"2026-01-22T15:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.210126 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.210201 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.210226 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.210256 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.210280 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:35Z","lastTransitionTime":"2026-01-22T15:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.313593 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.313655 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.313678 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.313709 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.313729 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:35Z","lastTransitionTime":"2026-01-22T15:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.409817 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.410039 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-22 15:26:39.409961156 +0000 UTC m=+146.171488106 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.417168 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.417218 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.417235 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.417259 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.417277 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:35Z","lastTransitionTime":"2026-01-22T15:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.511403 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.511505 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.511561 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.511596 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.511614 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.511670 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.511696 4825 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.511717 4825 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.511719 4825 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.511801 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 15:26:39.511768576 +0000 UTC m=+146.273295556 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.511799 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.511842 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 15:26:39.511822497 +0000 UTC m=+146.273349597 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.511853 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.511871 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 15:26:39.511855798 +0000 UTC m=+146.273382828 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.511876 4825 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.511962 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 15:26:39.511942061 +0000 UTC m=+146.273469051 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.516564 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.516600 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.516714 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.516908 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.516884 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.517069 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.517191 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.517311 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.519099 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 03:20:42.367787246 +0000 UTC Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.519590 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.519695 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.519723 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.519751 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.519776 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:35Z","lastTransitionTime":"2026-01-22T15:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.622467 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.622505 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.622514 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.622530 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.622539 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:35Z","lastTransitionTime":"2026-01-22T15:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.726029 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.726313 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.726331 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.726350 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.726361 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:35Z","lastTransitionTime":"2026-01-22T15:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.760296 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.760358 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.760375 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.760400 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.760417 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:35Z","lastTransitionTime":"2026-01-22T15:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.780596 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:35Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.785043 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.785075 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.785084 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.785098 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.785107 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:35Z","lastTransitionTime":"2026-01-22T15:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.804251 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:35Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.807644 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.807688 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.807699 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.807726 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.807739 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:35Z","lastTransitionTime":"2026-01-22T15:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.824730 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:35Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.828457 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.828489 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.828499 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.828515 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.828528 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:35Z","lastTransitionTime":"2026-01-22T15:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.843597 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:35Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.846852 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.846938 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.846956 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.847006 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.847026 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:35Z","lastTransitionTime":"2026-01-22T15:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.861751 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T15:25:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63828c1b-c3c3-4e3c-af40-4df88d9bdc0c\\\",\\\"systemUUID\\\":\\\"8d0c9c57-c027-4cfc-93dd-2f319dfeea10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T15:25:35Z is after 2025-08-24T17:21:41Z" Jan 22 15:25:35 crc kubenswrapper[4825]: E0122 15:25:35.861919 4825 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.863517 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.863558 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.863572 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.863590 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.863603 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:35Z","lastTransitionTime":"2026-01-22T15:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.966405 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.966464 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.966480 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.966504 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:35 crc kubenswrapper[4825]: I0122 15:25:35.966522 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:35Z","lastTransitionTime":"2026-01-22T15:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.069874 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.069934 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.069952 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.069975 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.070018 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:36Z","lastTransitionTime":"2026-01-22T15:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.173728 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.173829 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.173859 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.173944 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.174037 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:36Z","lastTransitionTime":"2026-01-22T15:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.276801 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.276835 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.276845 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.276863 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.276875 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:36Z","lastTransitionTime":"2026-01-22T15:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.379639 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.379689 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.379705 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.379724 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.379737 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:36Z","lastTransitionTime":"2026-01-22T15:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.490542 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.490590 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.490604 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.490623 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.490636 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:36Z","lastTransitionTime":"2026-01-22T15:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.519618 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 03:24:16.874984665 +0000 UTC
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.592831 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.592882 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.592893 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.592911 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.592923 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:36Z","lastTransitionTime":"2026-01-22T15:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.695895 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.695950 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.695962 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.696228 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.696254 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:36Z","lastTransitionTime":"2026-01-22T15:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.798252 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.798296 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.798304 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.798318 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.798327 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:36Z","lastTransitionTime":"2026-01-22T15:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.902202 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.902250 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.902260 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.902276 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:36 crc kubenswrapper[4825]: I0122 15:25:36.902285 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:36Z","lastTransitionTime":"2026-01-22T15:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.004554 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.004592 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.004601 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.004618 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.004627 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:37Z","lastTransitionTime":"2026-01-22T15:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.107841 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.107939 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.108274 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.108663 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.108739 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:37Z","lastTransitionTime":"2026-01-22T15:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.213076 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.213141 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.213161 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.213189 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.213212 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:37Z","lastTransitionTime":"2026-01-22T15:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.316322 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.316403 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.316428 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.316462 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.316484 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:37Z","lastTransitionTime":"2026-01-22T15:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.419591 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.419645 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.419662 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.419685 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.419701 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:37Z","lastTransitionTime":"2026-01-22T15:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.516751 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8"
Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.516809 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 15:25:37 crc kubenswrapper[4825]: E0122 15:25:37.516890 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.516751 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:37 crc kubenswrapper[4825]: E0122 15:25:37.516973 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:37 crc kubenswrapper[4825]: E0122 15:25:37.517067 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.517266 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:37 crc kubenswrapper[4825]: E0122 15:25:37.517445 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.519785 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 13:55:14.029566547 +0000 UTC Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.522334 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.522387 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.522405 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.522429 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.522447 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:37Z","lastTransitionTime":"2026-01-22T15:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.625404 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.625445 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.625456 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.625484 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.625495 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:37Z","lastTransitionTime":"2026-01-22T15:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.727968 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.728025 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.728036 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.728053 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.728064 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:37Z","lastTransitionTime":"2026-01-22T15:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.830739 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.830765 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.830773 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.830785 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.830793 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:37Z","lastTransitionTime":"2026-01-22T15:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.933575 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.933716 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.933745 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.933772 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:37 crc kubenswrapper[4825]: I0122 15:25:37.933796 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:37Z","lastTransitionTime":"2026-01-22T15:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.038269 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.038528 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.038541 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.038564 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.038578 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:38Z","lastTransitionTime":"2026-01-22T15:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.141123 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.141175 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.141186 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.141206 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.141221 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:38Z","lastTransitionTime":"2026-01-22T15:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.244520 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.244558 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.244568 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.244582 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.244591 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:38Z","lastTransitionTime":"2026-01-22T15:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.348379 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.348438 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.348454 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.348479 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.348497 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:38Z","lastTransitionTime":"2026-01-22T15:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.451285 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.451347 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.451366 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.451389 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.451405 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:38Z","lastTransitionTime":"2026-01-22T15:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.520103 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 22:49:08.713799123 +0000 UTC Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.553858 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.553917 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.553928 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.553943 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.553951 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:38Z","lastTransitionTime":"2026-01-22T15:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.657828 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.657941 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.657960 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.658043 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.658064 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:38Z","lastTransitionTime":"2026-01-22T15:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.760897 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.760956 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.760967 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.761001 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.761014 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:38Z","lastTransitionTime":"2026-01-22T15:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.864809 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.864863 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.864884 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.864910 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.864927 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:38Z","lastTransitionTime":"2026-01-22T15:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.967898 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.967947 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.967955 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.967969 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:38 crc kubenswrapper[4825]: I0122 15:25:38.967995 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:38Z","lastTransitionTime":"2026-01-22T15:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.070527 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.070842 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.070857 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.070879 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.070894 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:39Z","lastTransitionTime":"2026-01-22T15:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.173853 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.173914 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.173926 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.173944 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.173956 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:39Z","lastTransitionTime":"2026-01-22T15:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.277688 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.277742 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.277761 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.277785 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.277803 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:39Z","lastTransitionTime":"2026-01-22T15:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.380492 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.380535 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.380547 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.380565 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.380575 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:39Z","lastTransitionTime":"2026-01-22T15:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.483466 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.483509 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.483553 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.483577 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.483594 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:39Z","lastTransitionTime":"2026-01-22T15:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.516055 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.516065 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.516076 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.516138 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:39 crc kubenswrapper[4825]: E0122 15:25:39.516665 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:39 crc kubenswrapper[4825]: E0122 15:25:39.516855 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:39 crc kubenswrapper[4825]: E0122 15:25:39.516965 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:39 crc kubenswrapper[4825]: E0122 15:25:39.517138 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.521077 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 16:21:36.282635558 +0000 UTC Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.536829 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.587556 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.587599 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.587612 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.587630 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.587645 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:39Z","lastTransitionTime":"2026-01-22T15:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.690754 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.690809 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.690828 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.690854 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.690873 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:39Z","lastTransitionTime":"2026-01-22T15:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.794352 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.794414 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.794426 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.794444 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.794457 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:39Z","lastTransitionTime":"2026-01-22T15:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.897454 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.897504 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.897519 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.897542 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.897558 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:39Z","lastTransitionTime":"2026-01-22T15:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.999693 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.999725 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.999732 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.999745 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:39 crc kubenswrapper[4825]: I0122 15:25:39.999754 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:39Z","lastTransitionTime":"2026-01-22T15:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.102438 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.102492 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.102507 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.102523 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.102537 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:40Z","lastTransitionTime":"2026-01-22T15:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.204919 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.204961 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.204971 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.205005 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.205020 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:40Z","lastTransitionTime":"2026-01-22T15:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.307570 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.307651 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.307699 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.307736 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.307762 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:40Z","lastTransitionTime":"2026-01-22T15:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.410716 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.410794 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.410817 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.410846 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.410867 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:40Z","lastTransitionTime":"2026-01-22T15:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.514265 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.514304 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.514313 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.514329 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.514338 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:40Z","lastTransitionTime":"2026-01-22T15:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.521273 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 16:28:29.53044666 +0000 UTC Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.616394 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.616446 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.616460 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.616477 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.616491 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:40Z","lastTransitionTime":"2026-01-22T15:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.719009 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.719051 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.719066 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.719088 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.719101 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:40Z","lastTransitionTime":"2026-01-22T15:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.821172 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.821229 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.821251 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.821272 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.821286 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:40Z","lastTransitionTime":"2026-01-22T15:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.923651 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.923711 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.923728 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.923751 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:40 crc kubenswrapper[4825]: I0122 15:25:40.923784 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:40Z","lastTransitionTime":"2026-01-22T15:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.026526 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.026581 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.026592 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.026610 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.026625 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:41Z","lastTransitionTime":"2026-01-22T15:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.129212 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.129270 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.129283 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.129298 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.129309 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:41Z","lastTransitionTime":"2026-01-22T15:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.232801 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.232865 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.232882 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.232907 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.232924 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:41Z","lastTransitionTime":"2026-01-22T15:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.335023 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.335062 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.335073 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.335090 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.335101 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:41Z","lastTransitionTime":"2026-01-22T15:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.437852 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.438294 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.438523 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.438716 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.438895 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:41Z","lastTransitionTime":"2026-01-22T15:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.516065 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.516106 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.516138 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:41 crc kubenswrapper[4825]: E0122 15:25:41.516184 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.516072 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:41 crc kubenswrapper[4825]: E0122 15:25:41.516290 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:41 crc kubenswrapper[4825]: E0122 15:25:41.516553 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:41 crc kubenswrapper[4825]: E0122 15:25:41.516790 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.522180 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 23:18:34.375850687 +0000 UTC Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.542333 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.542666 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.542877 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.543063 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.543226 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:41Z","lastTransitionTime":"2026-01-22T15:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.646626 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.646696 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.646721 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.646750 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.646771 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:41Z","lastTransitionTime":"2026-01-22T15:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.750068 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.750126 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.750146 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.750170 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.750187 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:41Z","lastTransitionTime":"2026-01-22T15:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.853687 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.853817 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.853842 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.853874 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.853910 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:41Z","lastTransitionTime":"2026-01-22T15:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.956642 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.956675 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.956684 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.956696 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:41 crc kubenswrapper[4825]: I0122 15:25:41.956709 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:41Z","lastTransitionTime":"2026-01-22T15:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.058908 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.058947 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.058959 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.058994 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.059007 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:42Z","lastTransitionTime":"2026-01-22T15:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.162209 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.162274 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.162292 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.162318 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.162337 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:42Z","lastTransitionTime":"2026-01-22T15:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.265337 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.265410 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.265428 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.265449 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.265464 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:42Z","lastTransitionTime":"2026-01-22T15:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.368106 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.368160 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.368176 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.368195 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.368211 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:42Z","lastTransitionTime":"2026-01-22T15:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.471813 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.471872 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.471888 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.471912 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.471929 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:42Z","lastTransitionTime":"2026-01-22T15:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.522454 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 23:55:30.870141411 +0000 UTC Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.575086 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.575159 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.575183 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.575206 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.575223 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:42Z","lastTransitionTime":"2026-01-22T15:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.677480 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.677550 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.677573 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.677638 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.677660 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:42Z","lastTransitionTime":"2026-01-22T15:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.780939 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.781003 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.781015 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.781031 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.781044 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:42Z","lastTransitionTime":"2026-01-22T15:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.883957 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.884050 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.884066 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.884091 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.884108 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:42Z","lastTransitionTime":"2026-01-22T15:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.987084 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.987629 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.987787 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.987952 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:42 crc kubenswrapper[4825]: I0122 15:25:42.988160 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:42Z","lastTransitionTime":"2026-01-22T15:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.090946 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.091099 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.091134 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.091169 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.091193 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:43Z","lastTransitionTime":"2026-01-22T15:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.194431 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.194505 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.194527 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.194555 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.194577 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:43Z","lastTransitionTime":"2026-01-22T15:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.296973 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.297405 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.297830 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.298053 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.298221 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:43Z","lastTransitionTime":"2026-01-22T15:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.401254 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.401299 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.401311 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.401328 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.401339 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:43Z","lastTransitionTime":"2026-01-22T15:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.503898 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.503955 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.503972 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.504020 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.504042 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:43Z","lastTransitionTime":"2026-01-22T15:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.516810 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.516881 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.517044 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:43 crc kubenswrapper[4825]: E0122 15:25:43.517054 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.517077 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:43 crc kubenswrapper[4825]: E0122 15:25:43.517201 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:43 crc kubenswrapper[4825]: E0122 15:25:43.517369 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:43 crc kubenswrapper[4825]: E0122 15:25:43.518918 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.519297 4825 scope.go:117] "RemoveContainer" containerID="14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.522675 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 15:10:04.145636206 +0000 UTC Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.576411 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=4.576384266 podStartE2EDuration="4.576384266s" podCreationTimestamp="2026-01-22 15:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:43.574031938 +0000 UTC m=+90.335558888" watchObservedRunningTime="2026-01-22 15:25:43.576384266 +0000 UTC m=+90.337911216" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.606240 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.606286 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.606302 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.606323 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.606336 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:43Z","lastTransitionTime":"2026-01-22T15:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.618688 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.618667304 podStartE2EDuration="1m12.618667304s" podCreationTimestamp="2026-01-22 15:24:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:43.598019254 +0000 UTC m=+90.359546174" watchObservedRunningTime="2026-01-22 15:25:43.618667304 +0000 UTC m=+90.380194224" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.664787 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-8jk65" podStartSLOduration=66.664768652 podStartE2EDuration="1m6.664768652s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:43.651489733 +0000 UTC m=+90.413016653" watchObservedRunningTime="2026-01-22 15:25:43.664768652 +0000 UTC m=+90.426295562" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.675588 4825 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=36.675570771 podStartE2EDuration="36.675570771s" podCreationTimestamp="2026-01-22 15:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:43.674897832 +0000 UTC m=+90.436424752" watchObservedRunningTime="2026-01-22 15:25:43.675570771 +0000 UTC m=+90.437097681" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.699842 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-ljkjt" podStartSLOduration=66.699824904 podStartE2EDuration="1m6.699824904s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:43.699192416 +0000 UTC m=+90.460719326" watchObservedRunningTime="2026-01-22 15:25:43.699824904 +0000 UTC m=+90.461351814" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.708103 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.708133 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.708144 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.708160 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.708170 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:43Z","lastTransitionTime":"2026-01-22T15:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.712267 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podStartSLOduration=66.712248449 podStartE2EDuration="1m6.712248449s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:43.712037673 +0000 UTC m=+90.473564573" watchObservedRunningTime="2026-01-22 15:25:43.712248449 +0000 UTC m=+90.473775359" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.742255 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-k59vq" podStartSLOduration=66.742236027 podStartE2EDuration="1m6.742236027s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:43.725800717 +0000 UTC m=+90.487327627" watchObservedRunningTime="2026-01-22 15:25:43.742236027 +0000 UTC m=+90.503762937" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.785747 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-5bzgc" podStartSLOduration=66.78572795 podStartE2EDuration="1m6.78572795s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 
15:25:43.784722951 +0000 UTC m=+90.546249861" watchObservedRunningTime="2026-01-22 15:25:43.78572795 +0000 UTC m=+90.547254870" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.799308 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.799285017 podStartE2EDuration="1m12.799285017s" podCreationTimestamp="2026-01-22 15:24:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:43.798801284 +0000 UTC m=+90.560328224" watchObservedRunningTime="2026-01-22 15:25:43.799285017 +0000 UTC m=+90.560811937" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.811050 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.811090 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.811099 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.811118 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.811129 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:43Z","lastTransitionTime":"2026-01-22T15:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.857997 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m4zbf" podStartSLOduration=65.857950194 podStartE2EDuration="1m5.857950194s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:43.857939554 +0000 UTC m=+90.619466594" watchObservedRunningTime="2026-01-22 15:25:43.857950194 +0000 UTC m=+90.619477104" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.868090 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=13.868074744 podStartE2EDuration="13.868074744s" podCreationTimestamp="2026-01-22 15:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:43.867492147 +0000 UTC m=+90.629019047" watchObservedRunningTime="2026-01-22 15:25:43.868074744 +0000 UTC m=+90.629601654" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.913963 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.914029 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.914040 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.914057 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:43 crc kubenswrapper[4825]: I0122 15:25:43.914066 4825 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:43Z","lastTransitionTime":"2026-01-22T15:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.006416 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c8f2b_a2a796f1-0c22-4a59-a525-e426ecf221bc/ovnkube-controller/2.log" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.008597 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerStarted","Data":"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924"} Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.008930 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.016089 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.016121 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.016130 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.016143 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.016154 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:44Z","lastTransitionTime":"2026-01-22T15:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.118668 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.118705 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.118715 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.118730 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.118740 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:44Z","lastTransitionTime":"2026-01-22T15:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.221144 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.221182 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.221193 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.221212 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.221224 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:44Z","lastTransitionTime":"2026-01-22T15:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.318361 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" podStartSLOduration=67.318338394 podStartE2EDuration="1m7.318338394s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:44.033115802 +0000 UTC m=+90.794642722" watchObservedRunningTime="2026-01-22 15:25:44.318338394 +0000 UTC m=+91.079865304" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.319413 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hrdl8"] Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.319544 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:44 crc kubenswrapper[4825]: E0122 15:25:44.319830 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.325463 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.325504 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.325515 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.325532 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.325544 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:44Z","lastTransitionTime":"2026-01-22T15:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.427829 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.427867 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.427876 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.427888 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.427896 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:44Z","lastTransitionTime":"2026-01-22T15:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.523832 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:31:02.844286219 +0000 UTC Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.530576 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.530616 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.530627 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.530646 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.530657 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:44Z","lastTransitionTime":"2026-01-22T15:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.632825 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.632861 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.632871 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.632885 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.632893 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:44Z","lastTransitionTime":"2026-01-22T15:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.735595 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.735630 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.735638 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.735653 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.735661 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:44Z","lastTransitionTime":"2026-01-22T15:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.837769 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.837805 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.837817 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.837834 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.837846 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:44Z","lastTransitionTime":"2026-01-22T15:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.940893 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.940961 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.941003 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.941027 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:44 crc kubenswrapper[4825]: I0122 15:25:44.941044 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:44Z","lastTransitionTime":"2026-01-22T15:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.043892 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.043952 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.043968 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.044028 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.044052 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:45Z","lastTransitionTime":"2026-01-22T15:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.147410 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.147483 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.147518 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.147561 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.147586 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:45Z","lastTransitionTime":"2026-01-22T15:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.250359 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.250401 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.250412 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.250426 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.250437 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:45Z","lastTransitionTime":"2026-01-22T15:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.353418 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.353525 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.353545 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.353571 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.353589 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:45Z","lastTransitionTime":"2026-01-22T15:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.456691 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.456730 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.456741 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.456755 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.456765 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:45Z","lastTransitionTime":"2026-01-22T15:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.516793 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:45 crc kubenswrapper[4825]: E0122 15:25:45.516913 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.516955 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:45 crc kubenswrapper[4825]: E0122 15:25:45.517007 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.518076 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:45 crc kubenswrapper[4825]: E0122 15:25:45.518235 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.523936 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 08:09:54.371590873 +0000 UTC Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.559386 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.559415 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.559428 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.559443 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.559455 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:45Z","lastTransitionTime":"2026-01-22T15:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.661711 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.661749 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.661766 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.661788 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.661804 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:45Z","lastTransitionTime":"2026-01-22T15:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.763753 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.763778 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.763786 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.763799 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.763807 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:45Z","lastTransitionTime":"2026-01-22T15:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.872193 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.872235 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.872244 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.872258 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.872270 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:45Z","lastTransitionTime":"2026-01-22T15:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.974361 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.974398 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.974410 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.974428 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:45 crc kubenswrapper[4825]: I0122 15:25:45.974440 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:45Z","lastTransitionTime":"2026-01-22T15:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.076273 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.076314 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.076325 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.076341 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.076352 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:46Z","lastTransitionTime":"2026-01-22T15:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.166484 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.166540 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.166558 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.166579 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.166594 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T15:25:46Z","lastTransitionTime":"2026-01-22T15:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.219756 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc"] Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.220117 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.222155 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.222387 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.222499 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.222837 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.325409 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f0ede596-947d-4382-ac5a-45121bf9399d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qkbmc\" (UID: \"f0ede596-947d-4382-ac5a-45121bf9399d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.325458 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f0ede596-947d-4382-ac5a-45121bf9399d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qkbmc\" (UID: \"f0ede596-947d-4382-ac5a-45121bf9399d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.325482 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/f0ede596-947d-4382-ac5a-45121bf9399d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qkbmc\" (UID: \"f0ede596-947d-4382-ac5a-45121bf9399d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.325500 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0ede596-947d-4382-ac5a-45121bf9399d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qkbmc\" (UID: \"f0ede596-947d-4382-ac5a-45121bf9399d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.325538 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f0ede596-947d-4382-ac5a-45121bf9399d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qkbmc\" (UID: \"f0ede596-947d-4382-ac5a-45121bf9399d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.426835 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f0ede596-947d-4382-ac5a-45121bf9399d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qkbmc\" (UID: \"f0ede596-947d-4382-ac5a-45121bf9399d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.426908 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f0ede596-947d-4382-ac5a-45121bf9399d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qkbmc\" (UID: \"f0ede596-947d-4382-ac5a-45121bf9399d\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.426934 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f0ede596-947d-4382-ac5a-45121bf9399d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qkbmc\" (UID: \"f0ede596-947d-4382-ac5a-45121bf9399d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.426966 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0ede596-947d-4382-ac5a-45121bf9399d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qkbmc\" (UID: \"f0ede596-947d-4382-ac5a-45121bf9399d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.426991 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f0ede596-947d-4382-ac5a-45121bf9399d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qkbmc\" (UID: \"f0ede596-947d-4382-ac5a-45121bf9399d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.427088 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f0ede596-947d-4382-ac5a-45121bf9399d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qkbmc\" (UID: \"f0ede596-947d-4382-ac5a-45121bf9399d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.427638 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f0ede596-947d-4382-ac5a-45121bf9399d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qkbmc\" (UID: \"f0ede596-947d-4382-ac5a-45121bf9399d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.427958 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f0ede596-947d-4382-ac5a-45121bf9399d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qkbmc\" (UID: \"f0ede596-947d-4382-ac5a-45121bf9399d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.436037 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0ede596-947d-4382-ac5a-45121bf9399d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qkbmc\" (UID: \"f0ede596-947d-4382-ac5a-45121bf9399d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.443778 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0ede596-947d-4382-ac5a-45121bf9399d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qkbmc\" (UID: \"f0ede596-947d-4382-ac5a-45121bf9399d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.515881 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:46 crc kubenswrapper[4825]: E0122 15:25:46.516251 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrdl8" podUID="538e3056-0e80-4b71-ada6-b7440b283761" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.524572 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 06:43:15.422591415 +0000 UTC Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.524619 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.532455 4825 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.578594 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" Jan 22 15:25:46 crc kubenswrapper[4825]: W0122 15:25:46.599827 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0ede596_947d_4382_ac5a_45121bf9399d.slice/crio-f841749aee47db629bb144fa52356c7b52d981dc6b1d9330014cfc59ac8105c3 WatchSource:0}: Error finding container f841749aee47db629bb144fa52356c7b52d981dc6b1d9330014cfc59ac8105c3: Status 404 returned error can't find the container with id f841749aee47db629bb144fa52356c7b52d981dc6b1d9330014cfc59ac8105c3 Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.702116 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.702264 4825 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.741750 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-82rs5"] Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.742305 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-82rs5" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.743190 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-f7rxw"] Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.743534 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-f7rxw" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.747087 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-88zfp"] Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.747131 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.747375 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-phwjz"] Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.747565 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.747775 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s7pg5"] Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.748696 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.748879 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.749074 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.749233 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.749461 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.751126 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.754960 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fwkrq"] Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.755813 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fwkrq" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.762308 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.763030 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 15:25:46 crc kubenswrapper[4825]: I0122 15:25:46.763367 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vpfb7"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.081265 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.081483 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.081888 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.082187 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: 
I0122 15:25:47.082321 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vpfb7" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.082319 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.082349 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.082359 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.082435 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.082505 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.083288 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f22rt"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.084057 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.086122 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-etcd-client\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.087406 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cspg9\" (UniqueName: \"kubernetes.io/projected/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-kube-api-access-cspg9\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.087642 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-config\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.088050 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.088111 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-etcd-serving-ca\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.088165 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-audit\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.088219 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-serving-cert\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.088254 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-image-import-ca\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.088285 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-node-pullsecrets\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.088312 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-encryption-config\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.088365 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-audit-dir\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.091610 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqmc8"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.092668 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqmc8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.094475 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.097041 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.097518 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" event={"ID":"f0ede596-947d-4382-ac5a-45121bf9399d","Type":"ContainerStarted","Data":"f841749aee47db629bb144fa52356c7b52d981dc6b1d9330014cfc59ac8105c3"} Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.098155 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.098223 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.098258 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.098429 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.098818 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.099337 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.099420 4825 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"serving-cert" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.099639 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.109429 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.110926 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.112767 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sccv"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.112889 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.113390 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.113828 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sccv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.114931 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.115663 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.116458 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.116677 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.121094 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.122215 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-qvds8"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.122479 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.122827 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.124660 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.124794 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.126211 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.130673 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.131391 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.132181 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.132569 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.132637 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.132763 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.133549 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 
15:25:47.133706 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.133827 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.133946 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.134125 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.134245 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.134333 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.134412 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.134467 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.134560 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.134817 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 
15:25:47.135074 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.135129 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.135173 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.135133 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.135455 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.135465 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.135670 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.135814 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.135883 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.135905 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.135817 4825 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.136503 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.139288 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.139448 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.139728 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.139941 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.140071 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.141382 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.142487 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.174142 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.174845 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 
15:25:47.175459 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.176965 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.181776 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.191755 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.191802 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3f90b820-57dd-4be0-9648-de26783bc914-audit-dir\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.191832 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73-config\") pod \"machine-api-operator-5694c8668f-82rs5\" (UID: \"6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-82rs5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.191860 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ba2eb0b7-43ae-49a7-9a19-c969039de168-audit-policies\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.191894 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-serving-cert\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.191923 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.191948 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72257f30-9f17-4974-aeec-0755be040824-client-ca\") pod \"route-controller-manager-6576b87f9c-77j7k\" (UID: \"72257f30-9f17-4974-aeec-0755be040824\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.191974 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/247312b2-b2ee-4e5c-bf2d-73dc7f59cc3d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vpfb7\" (UID: \"247312b2-b2ee-4e5c-bf2d-73dc7f59cc3d\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vpfb7" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192017 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192043 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rzjb\" (UniqueName: \"kubernetes.io/projected/5542df19-2024-4e82-a6b4-ba27c678a6f3-kube-api-access-9rzjb\") pod \"openshift-controller-manager-operator-756b6f6bc6-9sccv\" (UID: \"5542df19-2024-4e82-a6b4-ba27c678a6f3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sccv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192073 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-node-pullsecrets\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192106 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-encryption-config\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192142 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e57fb87b-8cec-4c88-a802-69631aef1a2e-client-ca\") pod \"controller-manager-879f6c89f-s7pg5\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192167 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72257f30-9f17-4974-aeec-0755be040824-serving-cert\") pod \"route-controller-manager-6576b87f9c-77j7k\" (UID: \"72257f30-9f17-4974-aeec-0755be040824\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192237 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b047505-d780-4596-86a8-92c7a3e8a07c-serving-cert\") pod \"authentication-operator-69f744f599-88zfp\" (UID: \"7b047505-d780-4596-86a8-92c7a3e8a07c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192261 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwfhz\" (UniqueName: \"kubernetes.io/projected/6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73-kube-api-access-mwfhz\") pod \"machine-api-operator-5694c8668f-82rs5\" (UID: \"6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-82rs5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192289 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f22rt\" 
(UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192332 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cspg9\" (UniqueName: \"kubernetes.io/projected/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-kube-api-access-cspg9\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192394 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192434 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-config\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192473 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4t2j\" (UniqueName: \"kubernetes.io/projected/7b047505-d780-4596-86a8-92c7a3e8a07c-kube-api-access-f4t2j\") pod \"authentication-operator-69f744f599-88zfp\" (UID: \"7b047505-d780-4596-86a8-92c7a3e8a07c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192505 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/07701433-aa2e-4b7a-a542-a1c4ecd5135e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dqmc8\" (UID: \"07701433-aa2e-4b7a-a542-a1c4ecd5135e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqmc8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192541 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ba2eb0b7-43ae-49a7-9a19-c969039de168-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192579 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ba2eb0b7-43ae-49a7-9a19-c969039de168-encryption-config\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192608 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j94gg\" (UniqueName: \"kubernetes.io/projected/ba2eb0b7-43ae-49a7-9a19-c969039de168-kube-api-access-j94gg\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192668 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-audit\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 
15:25:47.192697 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b047505-d780-4596-86a8-92c7a3e8a07c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-88zfp\" (UID: \"7b047505-d780-4596-86a8-92c7a3e8a07c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192721 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192755 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73-images\") pod \"machine-api-operator-5694c8668f-82rs5\" (UID: \"6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-82rs5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192794 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba2eb0b7-43ae-49a7-9a19-c969039de168-audit-dir\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.192896 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwtt4\" (UniqueName: \"kubernetes.io/projected/72257f30-9f17-4974-aeec-0755be040824-kube-api-access-cwtt4\") pod 
\"route-controller-manager-6576b87f9c-77j7k\" (UID: \"72257f30-9f17-4974-aeec-0755be040824\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193024 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193060 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swfz8\" (UniqueName: \"kubernetes.io/projected/5b874959-d450-49f1-ab62-1852a45fc258-kube-api-access-swfz8\") pod \"console-operator-58897d9998-fwkrq\" (UID: \"5b874959-d450-49f1-ab62-1852a45fc258\") " pod="openshift-console-operator/console-operator-58897d9998-fwkrq" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193088 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e57fb87b-8cec-4c88-a802-69631aef1a2e-serving-cert\") pod \"controller-manager-879f6c89f-s7pg5\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193128 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ba2eb0b7-43ae-49a7-9a19-c969039de168-etcd-client\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 
15:25:47.193211 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b047505-d780-4596-86a8-92c7a3e8a07c-config\") pod \"authentication-operator-69f744f599-88zfp\" (UID: \"7b047505-d780-4596-86a8-92c7a3e8a07c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193255 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1667d84-12f5-4cb0-9a46-f69c25bea89d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-94qsw\" (UID: \"c1667d84-12f5-4cb0-9a46-f69c25bea89d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193291 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b874959-d450-49f1-ab62-1852a45fc258-config\") pod \"console-operator-58897d9998-fwkrq\" (UID: \"5b874959-d450-49f1-ab62-1852a45fc258\") " pod="openshift-console-operator/console-operator-58897d9998-fwkrq" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193315 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b874959-d450-49f1-ab62-1852a45fc258-serving-cert\") pod \"console-operator-58897d9998-fwkrq\" (UID: \"5b874959-d450-49f1-ab62-1852a45fc258\") " pod="openshift-console-operator/console-operator-58897d9998-fwkrq" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193346 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-image-import-ca\") pod \"apiserver-76f77b778f-phwjz\" 
(UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193394 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e57fb87b-8cec-4c88-a802-69631aef1a2e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-s7pg5\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193433 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-audit-policies\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193498 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cw5n\" (UniqueName: \"kubernetes.io/projected/07701433-aa2e-4b7a-a542-a1c4ecd5135e-kube-api-access-8cw5n\") pod \"openshift-apiserver-operator-796bbdcf4f-dqmc8\" (UID: \"07701433-aa2e-4b7a-a542-a1c4ecd5135e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqmc8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193527 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193556 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwgmq\" (UniqueName: \"kubernetes.io/projected/3f90b820-57dd-4be0-9648-de26783bc914-kube-api-access-cwgmq\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193579 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9vgv\" (UniqueName: \"kubernetes.io/projected/e57fb87b-8cec-4c88-a802-69631aef1a2e-kube-api-access-k9vgv\") pod \"controller-manager-879f6c89f-s7pg5\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193606 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba2eb0b7-43ae-49a7-9a19-c969039de168-serving-cert\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193638 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwvlh\" (UniqueName: \"kubernetes.io/projected/c1667d84-12f5-4cb0-9a46-f69c25bea89d-kube-api-access-gwvlh\") pod \"cluster-image-registry-operator-dc59b4c8b-94qsw\" (UID: \"c1667d84-12f5-4cb0-9a46-f69c25bea89d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193678 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1667d84-12f5-4cb0-9a46-f69c25bea89d-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-94qsw\" (UID: \"c1667d84-12f5-4cb0-9a46-f69c25bea89d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193714 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-audit-dir\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193745 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b874959-d450-49f1-ab62-1852a45fc258-trusted-ca\") pod \"console-operator-58897d9998-fwkrq\" (UID: \"5b874959-d450-49f1-ab62-1852a45fc258\") " pod="openshift-console-operator/console-operator-58897d9998-fwkrq" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193775 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-etcd-client\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193801 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-82rs5\" (UID: \"6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-82rs5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193872 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193900 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193928 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.193963 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e57fb87b-8cec-4c88-a802-69631aef1a2e-config\") pod \"controller-manager-879f6c89f-s7pg5\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.194020 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b047505-d780-4596-86a8-92c7a3e8a07c-service-ca-bundle\") pod \"authentication-operator-69f744f599-88zfp\" (UID: \"7b047505-d780-4596-86a8-92c7a3e8a07c\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.194053 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1667d84-12f5-4cb0-9a46-f69c25bea89d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-94qsw\" (UID: \"c1667d84-12f5-4cb0-9a46-f69c25bea89d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.194079 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-etcd-serving-ca\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.194107 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.194134 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kjnn\" (UniqueName: \"kubernetes.io/projected/bb8c16fb-b627-4b4d-8c02-5f9537eea746-kube-api-access-8kjnn\") pod \"downloads-7954f5f757-f7rxw\" (UID: \"bb8c16fb-b627-4b4d-8c02-5f9537eea746\") " pod="openshift-console/downloads-7954f5f757-f7rxw" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.194167 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-vszf8\" (UniqueName: \"kubernetes.io/projected/247312b2-b2ee-4e5c-bf2d-73dc7f59cc3d-kube-api-access-vszf8\") pod \"cluster-samples-operator-665b6dd947-vpfb7\" (UID: \"247312b2-b2ee-4e5c-bf2d-73dc7f59cc3d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vpfb7" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.194208 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07701433-aa2e-4b7a-a542-a1c4ecd5135e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dqmc8\" (UID: \"07701433-aa2e-4b7a-a542-a1c4ecd5135e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqmc8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.194230 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72257f30-9f17-4974-aeec-0755be040824-config\") pod \"route-controller-manager-6576b87f9c-77j7k\" (UID: \"72257f30-9f17-4974-aeec-0755be040824\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.194259 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5542df19-2024-4e82-a6b4-ba27c678a6f3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9sccv\" (UID: \"5542df19-2024-4e82-a6b4-ba27c678a6f3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sccv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.194289 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba2eb0b7-43ae-49a7-9a19-c969039de168-trusted-ca-bundle\") pod 
\"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.194314 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5542df19-2024-4e82-a6b4-ba27c678a6f3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9sccv\" (UID: \"5542df19-2024-4e82-a6b4-ba27c678a6f3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sccv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.206991 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.207037 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.207463 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6jdgh"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.208044 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.208096 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-f7rxw"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.208209 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jdgh" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.208250 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.208287 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.208497 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.208568 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.208668 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-node-pullsecrets\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.209453 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.210143 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.210241 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.210516 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.210839 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.211096 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.211231 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.211251 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-audit\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.211270 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-82rs5"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 
15:25:47.211281 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-88zfp"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.211446 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.211736 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.213242 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.213555 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-image-import-ca\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.214358 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-config\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.214491 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.214629 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-audit-dir\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " 
pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.219609 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.220062 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-serving-cert\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.220707 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.220802 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-encryption-config\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.221557 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.221686 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.221764 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.222803 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 
15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.223078 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.223119 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.223183 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.223288 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.223365 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.223443 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqmc8"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.223628 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.223800 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sccv"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.224177 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.224368 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-etcd-serving-ca\") pod \"apiserver-76f77b778f-phwjz\" (UID: 
\"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.225316 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s7pg5"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.225934 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.227064 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f22rt"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.229012 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.233892 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qvds8"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.235156 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cspg9\" (UniqueName: \"kubernetes.io/projected/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-kube-api-access-cspg9\") pod \"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.235371 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.236433 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-phwjz"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.268104 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aeb10bb8-1d41-433a-8f08-2edf3eefaa7c-etcd-client\") pod 
\"apiserver-76f77b778f-phwjz\" (UID: \"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c\") " pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.268937 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.269880 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vpfb7"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.271425 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fwkrq"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.271728 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.272632 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z9s9f"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.273955 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-7t586"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.275092 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.275389 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.276526 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-7t586" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.277256 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.277631 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-t8cfv"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.278659 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.279465 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.281346 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.283168 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.283413 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.284815 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8g6lv"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.287433 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8g6lv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.289190 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.289537 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.289673 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.289746 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.289847 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.290114 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.290369 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.290548 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.290698 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.291367 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.291429 4825 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.291454 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.291560 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.291768 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.292508 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.292823 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.293116 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.294684 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-7hzt4"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295206 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8npv4\" (UniqueName: \"kubernetes.io/projected/81d43c37-4152-47d0-be95-a390693902e9-kube-api-access-8npv4\") pod \"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295250 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swfz8\" (UniqueName: 
\"kubernetes.io/projected/5b874959-d450-49f1-ab62-1852a45fc258-kube-api-access-swfz8\") pod \"console-operator-58897d9998-fwkrq\" (UID: \"5b874959-d450-49f1-ab62-1852a45fc258\") " pod="openshift-console-operator/console-operator-58897d9998-fwkrq" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295270 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e57fb87b-8cec-4c88-a802-69631aef1a2e-serving-cert\") pod \"controller-manager-879f6c89f-s7pg5\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295280 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7hzt4" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295291 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ba2eb0b7-43ae-49a7-9a19-c969039de168-etcd-client\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295309 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b047505-d780-4596-86a8-92c7a3e8a07c-config\") pod \"authentication-operator-69f744f599-88zfp\" (UID: \"7b047505-d780-4596-86a8-92c7a3e8a07c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295323 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b874959-d450-49f1-ab62-1852a45fc258-config\") pod \"console-operator-58897d9998-fwkrq\" (UID: \"5b874959-d450-49f1-ab62-1852a45fc258\") " 
pod="openshift-console-operator/console-operator-58897d9998-fwkrq" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295337 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b874959-d450-49f1-ab62-1852a45fc258-serving-cert\") pod \"console-operator-58897d9998-fwkrq\" (UID: \"5b874959-d450-49f1-ab62-1852a45fc258\") " pod="openshift-console-operator/console-operator-58897d9998-fwkrq" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295351 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1667d84-12f5-4cb0-9a46-f69c25bea89d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-94qsw\" (UID: \"c1667d84-12f5-4cb0-9a46-f69c25bea89d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295369 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e57fb87b-8cec-4c88-a802-69631aef1a2e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-s7pg5\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295383 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-audit-policies\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295407 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295442 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwgmq\" (UniqueName: \"kubernetes.io/projected/3f90b820-57dd-4be0-9648-de26783bc914-kube-api-access-cwgmq\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295456 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9vgv\" (UniqueName: \"kubernetes.io/projected/e57fb87b-8cec-4c88-a802-69631aef1a2e-kube-api-access-k9vgv\") pod \"controller-manager-879f6c89f-s7pg5\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295469 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba2eb0b7-43ae-49a7-9a19-c969039de168-serving-cert\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295489 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cw5n\" (UniqueName: \"kubernetes.io/projected/07701433-aa2e-4b7a-a542-a1c4ecd5135e-kube-api-access-8cw5n\") pod \"openshift-apiserver-operator-796bbdcf4f-dqmc8\" (UID: \"07701433-aa2e-4b7a-a542-a1c4ecd5135e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqmc8" Jan 22 15:25:47 crc 
kubenswrapper[4825]: I0122 15:25:47.295503 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-service-ca\") pod \"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295520 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1667d84-12f5-4cb0-9a46-f69c25bea89d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-94qsw\" (UID: \"c1667d84-12f5-4cb0-9a46-f69c25bea89d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295536 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwvlh\" (UniqueName: \"kubernetes.io/projected/c1667d84-12f5-4cb0-9a46-f69c25bea89d-kube-api-access-gwvlh\") pod \"cluster-image-registry-operator-dc59b4c8b-94qsw\" (UID: \"c1667d84-12f5-4cb0-9a46-f69c25bea89d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295554 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b874959-d450-49f1-ab62-1852a45fc258-trusted-ca\") pod \"console-operator-58897d9998-fwkrq\" (UID: \"5b874959-d450-49f1-ab62-1852a45fc258\") " pod="openshift-console-operator/console-operator-58897d9998-fwkrq" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295574 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-82rs5\" 
(UID: \"6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-82rs5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295604 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295620 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295634 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e57fb87b-8cec-4c88-a802-69631aef1a2e-config\") pod \"controller-manager-879f6c89f-s7pg5\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295650 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b047505-d780-4596-86a8-92c7a3e8a07c-service-ca-bundle\") pod \"authentication-operator-69f744f599-88zfp\" (UID: \"7b047505-d780-4596-86a8-92c7a3e8a07c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295666 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1667d84-12f5-4cb0-9a46-f69c25bea89d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-94qsw\" (UID: \"c1667d84-12f5-4cb0-9a46-f69c25bea89d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295701 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295716 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72257f30-9f17-4974-aeec-0755be040824-config\") pod \"route-controller-manager-6576b87f9c-77j7k\" (UID: \"72257f30-9f17-4974-aeec-0755be040824\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295733 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kjnn\" (UniqueName: \"kubernetes.io/projected/bb8c16fb-b627-4b4d-8c02-5f9537eea746-kube-api-access-8kjnn\") pod \"downloads-7954f5f757-f7rxw\" (UID: \"bb8c16fb-b627-4b4d-8c02-5f9537eea746\") " pod="openshift-console/downloads-7954f5f757-f7rxw" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295749 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vszf8\" (UniqueName: \"kubernetes.io/projected/247312b2-b2ee-4e5c-bf2d-73dc7f59cc3d-kube-api-access-vszf8\") pod \"cluster-samples-operator-665b6dd947-vpfb7\" (UID: \"247312b2-b2ee-4e5c-bf2d-73dc7f59cc3d\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vpfb7" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295763 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07701433-aa2e-4b7a-a542-a1c4ecd5135e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dqmc8\" (UID: \"07701433-aa2e-4b7a-a542-a1c4ecd5135e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqmc8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295779 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba2eb0b7-43ae-49a7-9a19-c969039de168-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295795 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5542df19-2024-4e82-a6b4-ba27c678a6f3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9sccv\" (UID: \"5542df19-2024-4e82-a6b4-ba27c678a6f3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sccv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295813 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5542df19-2024-4e82-a6b4-ba27c678a6f3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9sccv\" (UID: \"5542df19-2024-4e82-a6b4-ba27c678a6f3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sccv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295828 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/3f90b820-57dd-4be0-9648-de26783bc914-audit-dir\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295844 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295863 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73-config\") pod \"machine-api-operator-5694c8668f-82rs5\" (UID: \"6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-82rs5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295877 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ba2eb0b7-43ae-49a7-9a19-c969039de168-audit-policies\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295892 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295906 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72257f30-9f17-4974-aeec-0755be040824-client-ca\") pod \"route-controller-manager-6576b87f9c-77j7k\" (UID: \"72257f30-9f17-4974-aeec-0755be040824\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295926 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/247312b2-b2ee-4e5c-bf2d-73dc7f59cc3d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vpfb7\" (UID: \"247312b2-b2ee-4e5c-bf2d-73dc7f59cc3d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vpfb7" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295942 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295957 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81d43c37-4152-47d0-be95-a390693902e9-console-serving-cert\") pod \"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.295974 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rzjb\" (UniqueName: \"kubernetes.io/projected/5542df19-2024-4e82-a6b4-ba27c678a6f3-kube-api-access-9rzjb\") pod \"openshift-controller-manager-operator-756b6f6bc6-9sccv\" 
(UID: \"5542df19-2024-4e82-a6b4-ba27c678a6f3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sccv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296006 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e57fb87b-8cec-4c88-a802-69631aef1a2e-client-ca\") pod \"controller-manager-879f6c89f-s7pg5\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296026 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-oauth-serving-cert\") pod \"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296048 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b047505-d780-4596-86a8-92c7a3e8a07c-serving-cert\") pod \"authentication-operator-69f744f599-88zfp\" (UID: \"7b047505-d780-4596-86a8-92c7a3e8a07c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296068 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwfhz\" (UniqueName: \"kubernetes.io/projected/6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73-kube-api-access-mwfhz\") pod \"machine-api-operator-5694c8668f-82rs5\" (UID: \"6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-82rs5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296083 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/72257f30-9f17-4974-aeec-0755be040824-serving-cert\") pod \"route-controller-manager-6576b87f9c-77j7k\" (UID: \"72257f30-9f17-4974-aeec-0755be040824\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296102 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-trusted-ca-bundle\") pod \"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296137 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296154 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296169 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4t2j\" (UniqueName: \"kubernetes.io/projected/7b047505-d780-4596-86a8-92c7a3e8a07c-kube-api-access-f4t2j\") pod \"authentication-operator-69f744f599-88zfp\" (UID: \"7b047505-d780-4596-86a8-92c7a3e8a07c\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296187 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07701433-aa2e-4b7a-a542-a1c4ecd5135e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dqmc8\" (UID: \"07701433-aa2e-4b7a-a542-a1c4ecd5135e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqmc8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296203 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-console-config\") pod \"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296218 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ba2eb0b7-43ae-49a7-9a19-c969039de168-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296232 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ba2eb0b7-43ae-49a7-9a19-c969039de168-encryption-config\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296248 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j94gg\" (UniqueName: \"kubernetes.io/projected/ba2eb0b7-43ae-49a7-9a19-c969039de168-kube-api-access-j94gg\") pod 
\"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296263 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b047505-d780-4596-86a8-92c7a3e8a07c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-88zfp\" (UID: \"7b047505-d780-4596-86a8-92c7a3e8a07c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296286 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296301 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73-images\") pod \"machine-api-operator-5694c8668f-82rs5\" (UID: \"6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-82rs5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296316 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba2eb0b7-43ae-49a7-9a19-c969039de168-audit-dir\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296332 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwtt4\" (UniqueName: 
\"kubernetes.io/projected/72257f30-9f17-4974-aeec-0755be040824-kube-api-access-cwtt4\") pod \"route-controller-manager-6576b87f9c-77j7k\" (UID: \"72257f30-9f17-4974-aeec-0755be040824\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296353 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296369 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81d43c37-4152-47d0-be95-a390693902e9-console-oauth-config\") pod \"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.297104 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73-config\") pod \"machine-api-operator-5694c8668f-82rs5\" (UID: \"6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-82rs5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.296669 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 
15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.297993 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.298084 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.298318 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b047505-d780-4596-86a8-92c7a3e8a07c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-88zfp\" (UID: \"7b047505-d780-4596-86a8-92c7a3e8a07c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.298787 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72257f30-9f17-4974-aeec-0755be040824-client-ca\") pod \"route-controller-manager-6576b87f9c-77j7k\" (UID: \"72257f30-9f17-4974-aeec-0755be040824\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.299220 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wbdh2"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.299361 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72257f30-9f17-4974-aeec-0755be040824-config\") pod \"route-controller-manager-6576b87f9c-77j7k\" (UID: \"72257f30-9f17-4974-aeec-0755be040824\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.299492 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.299857 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ncxt7"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.311127 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kv4vj"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.311618 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.311688 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.311750 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.311843 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e57fb87b-8cec-4c88-a802-69631aef1a2e-serving-cert\") pod \"controller-manager-879f6c89f-s7pg5\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.311891 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.311914 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba2eb0b7-43ae-49a7-9a19-c969039de168-serving-cert\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.312004 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.312254 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wbdh2" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.312440 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73-images\") pod \"machine-api-operator-5694c8668f-82rs5\" (UID: \"6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-82rs5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.312454 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/247312b2-b2ee-4e5c-bf2d-73dc7f59cc3d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vpfb7\" (UID: \"247312b2-b2ee-4e5c-bf2d-73dc7f59cc3d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vpfb7" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.312533 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ncxt7" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.312664 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07701433-aa2e-4b7a-a542-a1c4ecd5135e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dqmc8\" (UID: \"07701433-aa2e-4b7a-a542-a1c4ecd5135e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqmc8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.312502 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba2eb0b7-43ae-49a7-9a19-c969039de168-audit-dir\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.313762 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ba2eb0b7-43ae-49a7-9a19-c969039de168-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.313791 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b874959-d450-49f1-ab62-1852a45fc258-trusted-ca\") pod \"console-operator-58897d9998-fwkrq\" (UID: \"5b874959-d450-49f1-ab62-1852a45fc258\") " pod="openshift-console-operator/console-operator-58897d9998-fwkrq" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.313864 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.314180 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ba2eb0b7-43ae-49a7-9a19-c969039de168-etcd-client\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.314212 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.314565 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5542df19-2024-4e82-a6b4-ba27c678a6f3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9sccv\" (UID: \"5542df19-2024-4e82-a6b4-ba27c678a6f3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sccv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.314646 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.297117 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ba2eb0b7-43ae-49a7-9a19-c969039de168-audit-policies\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.314737 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/e57fb87b-8cec-4c88-a802-69631aef1a2e-client-ca\") pod \"controller-manager-879f6c89f-s7pg5\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.315167 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.315278 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-96g5g"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.315372 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b047505-d780-4596-86a8-92c7a3e8a07c-config\") pod \"authentication-operator-69f744f599-88zfp\" (UID: \"7b047505-d780-4596-86a8-92c7a3e8a07c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.315458 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba2eb0b7-43ae-49a7-9a19-c969039de168-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.315490 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.315705 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-96g5g" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.316357 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3f90b820-57dd-4be0-9648-de26783bc914-audit-dir\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.316729 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e57fb87b-8cec-4c88-a802-69631aef1a2e-config\") pod \"controller-manager-879f6c89f-s7pg5\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.317074 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b047505-d780-4596-86a8-92c7a3e8a07c-service-ca-bundle\") pod \"authentication-operator-69f744f599-88zfp\" (UID: \"7b047505-d780-4596-86a8-92c7a3e8a07c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.317253 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e57fb87b-8cec-4c88-a802-69631aef1a2e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-s7pg5\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 
15:25:47.317257 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1667d84-12f5-4cb0-9a46-f69c25bea89d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-94qsw\" (UID: \"c1667d84-12f5-4cb0-9a46-f69c25bea89d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.317515 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2wkw"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.317695 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-audit-policies\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.317865 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b874959-d450-49f1-ab62-1852a45fc258-config\") pod \"console-operator-58897d9998-fwkrq\" (UID: \"5b874959-d450-49f1-ab62-1852a45fc258\") " pod="openshift-console-operator/console-operator-58897d9998-fwkrq" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.318150 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-89hpb"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.318262 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2wkw" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.319103 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-82rs5\" (UID: \"6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-82rs5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.319223 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4xhdn"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.319396 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-89hpb" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.319888 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4xhdn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.320124 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.320696 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.321399 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-857bd"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.323618 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4g57x"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.323717 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-857bd" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.326326 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9b4mz"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.326521 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5542df19-2024-4e82-a6b4-ba27c678a6f3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9sccv\" (UID: \"5542df19-2024-4e82-a6b4-ba27c678a6f3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sccv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.326537 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b047505-d780-4596-86a8-92c7a3e8a07c-serving-cert\") pod \"authentication-operator-69f744f599-88zfp\" (UID: \"7b047505-d780-4596-86a8-92c7a3e8a07c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.326720 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72257f30-9f17-4974-aeec-0755be040824-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-77j7k\" (UID: \"72257f30-9f17-4974-aeec-0755be040824\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.326794 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.327103 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b874959-d450-49f1-ab62-1852a45fc258-serving-cert\") pod \"console-operator-58897d9998-fwkrq\" (UID: \"5b874959-d450-49f1-ab62-1852a45fc258\") " pod="openshift-console-operator/console-operator-58897d9998-fwkrq" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.327204 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4g57x" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.327990 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-v5ljt"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.328319 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.328376 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-v5ljt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.328383 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9b4mz" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.328803 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qvrsf"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.329214 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-24pkt"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.329657 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-24pkt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.329789 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.330010 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xhfx7"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.330029 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qvrsf" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.330806 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xhfx7" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.330870 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1667d84-12f5-4cb0-9a46-f69c25bea89d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-94qsw\" (UID: \"c1667d84-12f5-4cb0-9a46-f69c25bea89d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.331031 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.331703 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.331811 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gzl8z"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.332341 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gzl8z" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.334711 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7t586"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.334763 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z9s9f"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.334774 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8g6lv"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.335990 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wbdh2"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.337637 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-t8cfv"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.337823 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07701433-aa2e-4b7a-a542-a1c4ecd5135e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dqmc8\" (UID: \"07701433-aa2e-4b7a-a542-a1c4ecd5135e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqmc8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.337953 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7hzt4"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.338317 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ba2eb0b7-43ae-49a7-9a19-c969039de168-encryption-config\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc 
kubenswrapper[4825]: I0122 15:25:47.341707 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.346150 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.346407 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.349368 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4g57x"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.351445 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.358933 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kv4vj"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.361189 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.364844 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-89hpb"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.365094 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 
15:25:47.367198 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ncxt7"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.370173 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.372444 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.374085 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.379695 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4xhdn"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.384682 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-24pkt"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.386416 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.387578 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-96g5g"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.390639 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xhfx7"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.392190 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9b4mz"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.394631 4825 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qvrsf"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.395549 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2wkw"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.396849 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d7e321a-a057-40e4-9826-4d9b8b46b30a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kv4vj\" (UID: \"4d7e321a-a057-40e4-9826-4d9b8b46b30a\") " pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.396883 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ce8aeac-1477-47a7-88ce-d0a46c66c5d6-config-volume\") pod \"dns-default-7t586\" (UID: \"2ce8aeac-1477-47a7-88ce-d0a46c66c5d6\") " pod="openshift-dns/dns-default-7t586" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.396909 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b6c94df9-bcdf-40c8-9217-781d33efd3db-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c8lt5\" (UID: \"b6c94df9-bcdf-40c8-9217-781d33efd3db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.396933 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4d7e321a-a057-40e4-9826-4d9b8b46b30a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kv4vj\" (UID: \"4d7e321a-a057-40e4-9826-4d9b8b46b30a\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.396955 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/39491ce6-ed96-48da-92ed-17b549f1da0e-srv-cert\") pod \"olm-operator-6b444d44fb-x9w66\" (UID: \"39491ce6-ed96-48da-92ed-17b549f1da0e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397005 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzss9\" (UniqueName: \"kubernetes.io/projected/2ce8aeac-1477-47a7-88ce-d0a46c66c5d6-kube-api-access-fzss9\") pod \"dns-default-7t586\" (UID: \"2ce8aeac-1477-47a7-88ce-d0a46c66c5d6\") " pod="openshift-dns/dns-default-7t586" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397026 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srczq\" (UniqueName: \"kubernetes.io/projected/7df2707d-94e8-4d29-84e8-14a50058f164-kube-api-access-srczq\") pod \"multus-admission-controller-857f4d67dd-8g6lv\" (UID: \"7df2707d-94e8-4d29-84e8-14a50058f164\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8g6lv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397045 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d6ea7686-9c0f-4e31-9d58-05888aedccc1-etcd-service-ca\") pod \"etcd-operator-b45778765-t8cfv\" (UID: \"d6ea7686-9c0f-4e31-9d58-05888aedccc1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397068 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/81d43c37-4152-47d0-be95-a390693902e9-console-serving-cert\") pod \"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397087 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ce8aeac-1477-47a7-88ce-d0a46c66c5d6-metrics-tls\") pod \"dns-default-7t586\" (UID: \"2ce8aeac-1477-47a7-88ce-d0a46c66c5d6\") " pod="openshift-dns/dns-default-7t586" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397108 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b6c94df9-bcdf-40c8-9217-781d33efd3db-metrics-tls\") pod \"ingress-operator-5b745b69d9-c8lt5\" (UID: \"b6c94df9-bcdf-40c8-9217-781d33efd3db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397175 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-oauth-serving-cert\") pod \"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397197 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx88n\" (UniqueName: \"kubernetes.io/projected/39491ce6-ed96-48da-92ed-17b549f1da0e-kube-api-access-kx88n\") pod \"olm-operator-6b444d44fb-x9w66\" (UID: \"39491ce6-ed96-48da-92ed-17b549f1da0e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397218 4825 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-trusted-ca-bundle\") pod \"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397241 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkf95\" (UniqueName: \"kubernetes.io/projected/d6ea7686-9c0f-4e31-9d58-05888aedccc1-kube-api-access-mkf95\") pod \"etcd-operator-b45778765-t8cfv\" (UID: \"d6ea7686-9c0f-4e31-9d58-05888aedccc1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397310 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6ea7686-9c0f-4e31-9d58-05888aedccc1-config\") pod \"etcd-operator-b45778765-t8cfv\" (UID: \"d6ea7686-9c0f-4e31-9d58-05888aedccc1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397364 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-console-config\") pod \"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397385 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6c94df9-bcdf-40c8-9217-781d33efd3db-trusted-ca\") pod \"ingress-operator-5b745b69d9-c8lt5\" (UID: \"b6c94df9-bcdf-40c8-9217-781d33efd3db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397406 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1ce5be48-c020-46f1-b576-07d8fb6197db-signing-cabundle\") pod \"service-ca-9c57cc56f-wbdh2\" (UID: \"1ce5be48-c020-46f1-b576-07d8fb6197db\") " pod="openshift-service-ca/service-ca-9c57cc56f-wbdh2" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397466 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4275h\" (UniqueName: \"kubernetes.io/projected/b6c94df9-bcdf-40c8-9217-781d33efd3db-kube-api-access-4275h\") pod \"ingress-operator-5b745b69d9-c8lt5\" (UID: \"b6c94df9-bcdf-40c8-9217-781d33efd3db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397489 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z22f8\" (UniqueName: \"kubernetes.io/projected/1ce5be48-c020-46f1-b576-07d8fb6197db-kube-api-access-z22f8\") pod \"service-ca-9c57cc56f-wbdh2\" (UID: \"1ce5be48-c020-46f1-b576-07d8fb6197db\") " pod="openshift-service-ca/service-ca-9c57cc56f-wbdh2" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397507 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d6ea7686-9c0f-4e31-9d58-05888aedccc1-etcd-ca\") pod \"etcd-operator-b45778765-t8cfv\" (UID: \"d6ea7686-9c0f-4e31-9d58-05888aedccc1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397524 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d6ea7686-9c0f-4e31-9d58-05888aedccc1-etcd-client\") pod \"etcd-operator-b45778765-t8cfv\" (UID: \"d6ea7686-9c0f-4e31-9d58-05888aedccc1\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397547 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81d43c37-4152-47d0-be95-a390693902e9-console-oauth-config\") pod \"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397570 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8npv4\" (UniqueName: \"kubernetes.io/projected/81d43c37-4152-47d0-be95-a390693902e9-kube-api-access-8npv4\") pod \"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397611 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/39491ce6-ed96-48da-92ed-17b549f1da0e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x9w66\" (UID: \"39491ce6-ed96-48da-92ed-17b549f1da0e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397635 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6ea7686-9c0f-4e31-9d58-05888aedccc1-serving-cert\") pod \"etcd-operator-b45778765-t8cfv\" (UID: \"d6ea7686-9c0f-4e31-9d58-05888aedccc1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397721 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-service-ca\") pod 
\"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397805 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5pd4\" (UniqueName: \"kubernetes.io/projected/4d7e321a-a057-40e4-9826-4d9b8b46b30a-kube-api-access-v5pd4\") pod \"marketplace-operator-79b997595-kv4vj\" (UID: \"4d7e321a-a057-40e4-9826-4d9b8b46b30a\") " pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397860 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s6lk\" (UniqueName: \"kubernetes.io/projected/f5870a5b-0a7c-4b87-8894-d7b88a9864ba-kube-api-access-5s6lk\") pod \"ingress-canary-7hzt4\" (UID: \"f5870a5b-0a7c-4b87-8894-d7b88a9864ba\") " pod="openshift-ingress-canary/ingress-canary-7hzt4" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397895 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5870a5b-0a7c-4b87-8894-d7b88a9864ba-cert\") pod \"ingress-canary-7hzt4\" (UID: \"f5870a5b-0a7c-4b87-8894-d7b88a9864ba\") " pod="openshift-ingress-canary/ingress-canary-7hzt4" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397920 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1ce5be48-c020-46f1-b576-07d8fb6197db-signing-key\") pod \"service-ca-9c57cc56f-wbdh2\" (UID: \"1ce5be48-c020-46f1-b576-07d8fb6197db\") " pod="openshift-service-ca/service-ca-9c57cc56f-wbdh2" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.397954 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/7df2707d-94e8-4d29-84e8-14a50058f164-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8g6lv\" (UID: \"7df2707d-94e8-4d29-84e8-14a50058f164\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8g6lv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.398125 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gzl8z"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.401021 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-oauth-serving-cert\") pod \"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.403808 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-console-config\") pod \"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.403838 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-trusted-ca-bundle\") pod \"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.404381 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.404481 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-service-ca\") pod \"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.405260 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81d43c37-4152-47d0-be95-a390693902e9-console-serving-cert\") pod \"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.410527 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81d43c37-4152-47d0-be95-a390693902e9-console-oauth-config\") pod \"console-f9d7485db-qvds8\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.440345 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwvlh\" (UniqueName: \"kubernetes.io/projected/c1667d84-12f5-4cb0-9a46-f69c25bea89d-kube-api-access-gwvlh\") pod \"cluster-image-registry-operator-dc59b4c8b-94qsw\" (UID: \"c1667d84-12f5-4cb0-9a46-f69c25bea89d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.461610 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwgmq\" (UniqueName: \"kubernetes.io/projected/3f90b820-57dd-4be0-9648-de26783bc914-kube-api-access-cwgmq\") pod \"oauth-openshift-558db77b4-f22rt\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.498154 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9rzjb\" (UniqueName: \"kubernetes.io/projected/5542df19-2024-4e82-a6b4-ba27c678a6f3-kube-api-access-9rzjb\") pod \"openshift-controller-manager-operator-756b6f6bc6-9sccv\" (UID: \"5542df19-2024-4e82-a6b4-ba27c678a6f3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sccv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.498430 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b6c94df9-bcdf-40c8-9217-781d33efd3db-metrics-tls\") pod \"ingress-operator-5b745b69d9-c8lt5\" (UID: \"b6c94df9-bcdf-40c8-9217-781d33efd3db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.498564 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx88n\" (UniqueName: \"kubernetes.io/projected/39491ce6-ed96-48da-92ed-17b549f1da0e-kube-api-access-kx88n\") pod \"olm-operator-6b444d44fb-x9w66\" (UID: \"39491ce6-ed96-48da-92ed-17b549f1da0e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.498599 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkf95\" (UniqueName: \"kubernetes.io/projected/d6ea7686-9c0f-4e31-9d58-05888aedccc1-kube-api-access-mkf95\") pod \"etcd-operator-b45778765-t8cfv\" (UID: \"d6ea7686-9c0f-4e31-9d58-05888aedccc1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.498629 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6ea7686-9c0f-4e31-9d58-05888aedccc1-config\") pod \"etcd-operator-b45778765-t8cfv\" (UID: \"d6ea7686-9c0f-4e31-9d58-05888aedccc1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 
15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.498662 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6c94df9-bcdf-40c8-9217-781d33efd3db-trusted-ca\") pod \"ingress-operator-5b745b69d9-c8lt5\" (UID: \"b6c94df9-bcdf-40c8-9217-781d33efd3db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.498680 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1ce5be48-c020-46f1-b576-07d8fb6197db-signing-cabundle\") pod \"service-ca-9c57cc56f-wbdh2\" (UID: \"1ce5be48-c020-46f1-b576-07d8fb6197db\") " pod="openshift-service-ca/service-ca-9c57cc56f-wbdh2" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.498731 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4275h\" (UniqueName: \"kubernetes.io/projected/b6c94df9-bcdf-40c8-9217-781d33efd3db-kube-api-access-4275h\") pod \"ingress-operator-5b745b69d9-c8lt5\" (UID: \"b6c94df9-bcdf-40c8-9217-781d33efd3db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.498749 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z22f8\" (UniqueName: \"kubernetes.io/projected/1ce5be48-c020-46f1-b576-07d8fb6197db-kube-api-access-z22f8\") pod \"service-ca-9c57cc56f-wbdh2\" (UID: \"1ce5be48-c020-46f1-b576-07d8fb6197db\") " pod="openshift-service-ca/service-ca-9c57cc56f-wbdh2" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.498766 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d6ea7686-9c0f-4e31-9d58-05888aedccc1-etcd-ca\") pod \"etcd-operator-b45778765-t8cfv\" (UID: \"d6ea7686-9c0f-4e31-9d58-05888aedccc1\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.498782 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d6ea7686-9c0f-4e31-9d58-05888aedccc1-etcd-client\") pod \"etcd-operator-b45778765-t8cfv\" (UID: \"d6ea7686-9c0f-4e31-9d58-05888aedccc1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.498828 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/39491ce6-ed96-48da-92ed-17b549f1da0e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x9w66\" (UID: \"39491ce6-ed96-48da-92ed-17b549f1da0e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.498849 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6ea7686-9c0f-4e31-9d58-05888aedccc1-serving-cert\") pod \"etcd-operator-b45778765-t8cfv\" (UID: \"d6ea7686-9c0f-4e31-9d58-05888aedccc1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.498915 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5pd4\" (UniqueName: \"kubernetes.io/projected/4d7e321a-a057-40e4-9826-4d9b8b46b30a-kube-api-access-v5pd4\") pod \"marketplace-operator-79b997595-kv4vj\" (UID: \"4d7e321a-a057-40e4-9826-4d9b8b46b30a\") " pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.498959 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s6lk\" (UniqueName: \"kubernetes.io/projected/f5870a5b-0a7c-4b87-8894-d7b88a9864ba-kube-api-access-5s6lk\") pod 
\"ingress-canary-7hzt4\" (UID: \"f5870a5b-0a7c-4b87-8894-d7b88a9864ba\") " pod="openshift-ingress-canary/ingress-canary-7hzt4" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.498999 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5870a5b-0a7c-4b87-8894-d7b88a9864ba-cert\") pod \"ingress-canary-7hzt4\" (UID: \"f5870a5b-0a7c-4b87-8894-d7b88a9864ba\") " pod="openshift-ingress-canary/ingress-canary-7hzt4" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.499021 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1ce5be48-c020-46f1-b576-07d8fb6197db-signing-key\") pod \"service-ca-9c57cc56f-wbdh2\" (UID: \"1ce5be48-c020-46f1-b576-07d8fb6197db\") " pod="openshift-service-ca/service-ca-9c57cc56f-wbdh2" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.499039 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7df2707d-94e8-4d29-84e8-14a50058f164-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8g6lv\" (UID: \"7df2707d-94e8-4d29-84e8-14a50058f164\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8g6lv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.499071 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d7e321a-a057-40e4-9826-4d9b8b46b30a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kv4vj\" (UID: \"4d7e321a-a057-40e4-9826-4d9b8b46b30a\") " pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.499091 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ce8aeac-1477-47a7-88ce-d0a46c66c5d6-config-volume\") pod 
\"dns-default-7t586\" (UID: \"2ce8aeac-1477-47a7-88ce-d0a46c66c5d6\") " pod="openshift-dns/dns-default-7t586" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.499110 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b6c94df9-bcdf-40c8-9217-781d33efd3db-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c8lt5\" (UID: \"b6c94df9-bcdf-40c8-9217-781d33efd3db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.499127 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4d7e321a-a057-40e4-9826-4d9b8b46b30a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kv4vj\" (UID: \"4d7e321a-a057-40e4-9826-4d9b8b46b30a\") " pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.499155 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/39491ce6-ed96-48da-92ed-17b549f1da0e-srv-cert\") pod \"olm-operator-6b444d44fb-x9w66\" (UID: \"39491ce6-ed96-48da-92ed-17b549f1da0e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.499176 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzss9\" (UniqueName: \"kubernetes.io/projected/2ce8aeac-1477-47a7-88ce-d0a46c66c5d6-kube-api-access-fzss9\") pod \"dns-default-7t586\" (UID: \"2ce8aeac-1477-47a7-88ce-d0a46c66c5d6\") " pod="openshift-dns/dns-default-7t586" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.499207 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srczq\" (UniqueName: 
\"kubernetes.io/projected/7df2707d-94e8-4d29-84e8-14a50058f164-kube-api-access-srczq\") pod \"multus-admission-controller-857f4d67dd-8g6lv\" (UID: \"7df2707d-94e8-4d29-84e8-14a50058f164\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8g6lv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.499224 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d6ea7686-9c0f-4e31-9d58-05888aedccc1-etcd-service-ca\") pod \"etcd-operator-b45778765-t8cfv\" (UID: \"d6ea7686-9c0f-4e31-9d58-05888aedccc1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.499243 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ce8aeac-1477-47a7-88ce-d0a46c66c5d6-metrics-tls\") pod \"dns-default-7t586\" (UID: \"2ce8aeac-1477-47a7-88ce-d0a46c66c5d6\") " pod="openshift-dns/dns-default-7t586" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.499847 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d6ea7686-9c0f-4e31-9d58-05888aedccc1-etcd-ca\") pod \"etcd-operator-b45778765-t8cfv\" (UID: \"d6ea7686-9c0f-4e31-9d58-05888aedccc1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.500394 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ce8aeac-1477-47a7-88ce-d0a46c66c5d6-config-volume\") pod \"dns-default-7t586\" (UID: \"2ce8aeac-1477-47a7-88ce-d0a46c66c5d6\") " pod="openshift-dns/dns-default-7t586" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.500837 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d6ea7686-9c0f-4e31-9d58-05888aedccc1-config\") pod \"etcd-operator-b45778765-t8cfv\" (UID: \"d6ea7686-9c0f-4e31-9d58-05888aedccc1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.502039 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ce8aeac-1477-47a7-88ce-d0a46c66c5d6-metrics-tls\") pod \"dns-default-7t586\" (UID: \"2ce8aeac-1477-47a7-88ce-d0a46c66c5d6\") " pod="openshift-dns/dns-default-7t586" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.502730 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d6ea7686-9c0f-4e31-9d58-05888aedccc1-etcd-client\") pod \"etcd-operator-b45778765-t8cfv\" (UID: \"d6ea7686-9c0f-4e31-9d58-05888aedccc1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.502925 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d6ea7686-9c0f-4e31-9d58-05888aedccc1-etcd-service-ca\") pod \"etcd-operator-b45778765-t8cfv\" (UID: \"d6ea7686-9c0f-4e31-9d58-05888aedccc1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.503088 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6ea7686-9c0f-4e31-9d58-05888aedccc1-serving-cert\") pod \"etcd-operator-b45778765-t8cfv\" (UID: \"d6ea7686-9c0f-4e31-9d58-05888aedccc1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.503540 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5870a5b-0a7c-4b87-8894-d7b88a9864ba-cert\") pod 
\"ingress-canary-7hzt4\" (UID: \"f5870a5b-0a7c-4b87-8894-d7b88a9864ba\") " pod="openshift-ingress-canary/ingress-canary-7hzt4" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.504354 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7df2707d-94e8-4d29-84e8-14a50058f164-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8g6lv\" (UID: \"7df2707d-94e8-4d29-84e8-14a50058f164\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8g6lv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.517792 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swfz8\" (UniqueName: \"kubernetes.io/projected/5b874959-d450-49f1-ab62-1852a45fc258-kube-api-access-swfz8\") pod \"console-operator-58897d9998-fwkrq\" (UID: \"5b874959-d450-49f1-ab62-1852a45fc258\") " pod="openshift-console-operator/console-operator-58897d9998-fwkrq" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.535261 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.535448 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.535578 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.551841 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j94gg\" (UniqueName: \"kubernetes.io/projected/ba2eb0b7-43ae-49a7-9a19-c969039de168-kube-api-access-j94gg\") pod \"apiserver-7bbb656c7d-sbkgn\" (UID: \"ba2eb0b7-43ae-49a7-9a19-c969039de168\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.552353 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.564601 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.745071 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.826250 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.838182 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.838847 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.838966 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sccv" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.839149 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fwkrq" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.857254 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9vgv\" (UniqueName: \"kubernetes.io/projected/e57fb87b-8cec-4c88-a802-69631aef1a2e-kube-api-access-k9vgv\") pod \"controller-manager-879f6c89f-s7pg5\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.857535 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.858255 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.858520 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.861134 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.861502 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.865390 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.866839 4825 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4d7e321a-a057-40e4-9826-4d9b8b46b30a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kv4vj\" (UID: \"4d7e321a-a057-40e4-9826-4d9b8b46b30a\") " pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.867025 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.867585 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.867887 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4t2j\" (UniqueName: \"kubernetes.io/projected/7b047505-d780-4596-86a8-92c7a3e8a07c-kube-api-access-f4t2j\") pod \"authentication-operator-69f744f599-88zfp\" (UID: \"7b047505-d780-4596-86a8-92c7a3e8a07c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.872842 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kjnn\" (UniqueName: \"kubernetes.io/projected/bb8c16fb-b627-4b4d-8c02-5f9537eea746-kube-api-access-8kjnn\") pod \"downloads-7954f5f757-f7rxw\" (UID: \"bb8c16fb-b627-4b4d-8c02-5f9537eea746\") " pod="openshift-console/downloads-7954f5f757-f7rxw" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.874568 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vszf8\" (UniqueName: \"kubernetes.io/projected/247312b2-b2ee-4e5c-bf2d-73dc7f59cc3d-kube-api-access-vszf8\") pod \"cluster-samples-operator-665b6dd947-vpfb7\" (UID: \"247312b2-b2ee-4e5c-bf2d-73dc7f59cc3d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vpfb7" Jan 22 
15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.874707 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.880568 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/39491ce6-ed96-48da-92ed-17b549f1da0e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x9w66\" (UID: \"39491ce6-ed96-48da-92ed-17b549f1da0e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.881383 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d7e321a-a057-40e4-9826-4d9b8b46b30a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kv4vj\" (UID: \"4d7e321a-a057-40e4-9826-4d9b8b46b30a\") " pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.884031 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/39491ce6-ed96-48da-92ed-17b549f1da0e-srv-cert\") pod \"olm-operator-6b444d44fb-x9w66\" (UID: \"39491ce6-ed96-48da-92ed-17b549f1da0e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.884789 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.885023 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwtt4\" (UniqueName: \"kubernetes.io/projected/72257f30-9f17-4974-aeec-0755be040824-kube-api-access-cwtt4\") pod \"route-controller-manager-6576b87f9c-77j7k\" (UID: \"72257f30-9f17-4974-aeec-0755be040824\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.893931 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1ce5be48-c020-46f1-b576-07d8fb6197db-signing-key\") pod \"service-ca-9c57cc56f-wbdh2\" (UID: \"1ce5be48-c020-46f1-b576-07d8fb6197db\") " pod="openshift-service-ca/service-ca-9c57cc56f-wbdh2" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.896668 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cw5n\" (UniqueName: \"kubernetes.io/projected/07701433-aa2e-4b7a-a542-a1c4ecd5135e-kube-api-access-8cw5n\") pod \"openshift-apiserver-operator-796bbdcf4f-dqmc8\" (UID: \"07701433-aa2e-4b7a-a542-a1c4ecd5135e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqmc8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.899674 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vpfb7" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.906093 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-phwjz"] Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.906366 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.911310 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1ce5be48-c020-46f1-b576-07d8fb6197db-signing-cabundle\") pod \"service-ca-9c57cc56f-wbdh2\" (UID: \"1ce5be48-c020-46f1-b576-07d8fb6197db\") " pod="openshift-service-ca/service-ca-9c57cc56f-wbdh2" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.925132 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.945177 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.965446 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.965729 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqmc8" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.971542 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" Jan 22 15:25:47 crc kubenswrapper[4825]: I0122 15:25:47.986069 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.004368 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.024425 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.035930 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-f7rxw" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.062060 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwfhz\" (UniqueName: \"kubernetes.io/projected/6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73-kube-api-access-mwfhz\") pod \"machine-api-operator-5694c8668f-82rs5\" (UID: \"6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-82rs5" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.064270 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.086128 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.118759 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1667d84-12f5-4cb0-9a46-f69c25bea89d-bound-sa-token\") pod 
\"cluster-image-registry-operator-dc59b4c8b-94qsw\" (UID: \"c1667d84-12f5-4cb0-9a46-f69c25bea89d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.124608 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.126609 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 22 15:25:48 crc kubenswrapper[4825]: W0122 15:25:48.130554 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaeb10bb8_1d41_433a_8f08_2edf3eefaa7c.slice/crio-f8e21ba190f47255abdbce759a11934040ab9d67d3f4ecb62e5f71bca79eabd5 WatchSource:0}: Error finding container f8e21ba190f47255abdbce759a11934040ab9d67d3f4ecb62e5f71bca79eabd5: Status 404 returned error can't find the container with id f8e21ba190f47255abdbce759a11934040ab9d67d3f4ecb62e5f71bca79eabd5 Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.147080 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.161403 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" event={"ID":"f0ede596-947d-4382-ac5a-45121bf9399d","Type":"ContainerStarted","Data":"eb4cdc9f5ec5dd88eb8c73201bdb60cad7721afd4f1355513ee0467f0aac1e93"} Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.165721 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.172165 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/b6c94df9-bcdf-40c8-9217-781d33efd3db-metrics-tls\") pod \"ingress-operator-5b745b69d9-c8lt5\" (UID: \"b6c94df9-bcdf-40c8-9217-781d33efd3db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.194571 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.200872 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6c94df9-bcdf-40c8-9217-781d33efd3db-trusted-ca\") pod \"ingress-operator-5b745b69d9-c8lt5\" (UID: \"b6c94df9-bcdf-40c8-9217-781d33efd3db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.205716 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.279321 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.279558 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.286179 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.306443 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-82rs5" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.310554 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.312394 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.323478 4825 request.go:700] Waited for 1.003901946s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&limit=500&resourceVersion=0 Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.325675 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.357475 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.364564 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.395203 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.409836 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.425510 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.445429 4825 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.465229 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.486318 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.509857 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.516276 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.546318 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.559605 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fwkrq"] Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.567130 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.586263 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.604762 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.627754 4825 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.646053 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.665477 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sccv"] Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.667734 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.685229 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.704807 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.723416 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s7pg5"] Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.724852 4825 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.744744 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.764725 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.786878 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.805401 4825 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.824776 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.832702 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f22rt"] Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.846846 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.864225 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.885645 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.905642 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.925452 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.934946 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-82rs5"] Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.944833 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.964666 4825 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 22 15:25:48 crc kubenswrapper[4825]: I0122 15:25:48.984634 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.004768 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.027246 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.037170 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vpfb7"] Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.044894 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.051458 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-88zfp"] Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.063893 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.070499 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqmc8"] Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.085036 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k"] Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.086562 4825 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"serving-cert" Jan 22 15:25:49 crc kubenswrapper[4825]: W0122 15:25:49.088570 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b047505_d780_4596_86a8_92c7a3e8a07c.slice/crio-ea6f720c04e2cbcf4426e05f30cfffc2a345863a71c99600406d73e5f354e2b7 WatchSource:0}: Error finding container ea6f720c04e2cbcf4426e05f30cfffc2a345863a71c99600406d73e5f354e2b7: Status 404 returned error can't find the container with id ea6f720c04e2cbcf4426e05f30cfffc2a345863a71c99600406d73e5f354e2b7 Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.093953 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-f7rxw"] Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.096015 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw"] Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.098413 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn"] Jan 22 15:25:49 crc kubenswrapper[4825]: W0122 15:25:49.102632 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07701433_aa2e_4b7a_a542_a1c4ecd5135e.slice/crio-6f4eb701f8d2062ec84df19a1fd514e0e6b180c1b90404758b462e1e2d9f7edf WatchSource:0}: Error finding container 6f4eb701f8d2062ec84df19a1fd514e0e6b180c1b90404758b462e1e2d9f7edf: Status 404 returned error can't find the container with id 6f4eb701f8d2062ec84df19a1fd514e0e6b180c1b90404758b462e1e2d9f7edf Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.104016 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.125316 4825 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 22 15:25:49 crc kubenswrapper[4825]: W0122 15:25:49.132710 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72257f30_9f17_4974_aeec_0755be040824.slice/crio-cd87fc9a5352a354d0f3e7ac91453d10c8f7da4e5dd099c86c053faea289dd33 WatchSource:0}: Error finding container cd87fc9a5352a354d0f3e7ac91453d10c8f7da4e5dd099c86c053faea289dd33: Status 404 returned error can't find the container with id cd87fc9a5352a354d0f3e7ac91453d10c8f7da4e5dd099c86c053faea289dd33 Jan 22 15:25:49 crc kubenswrapper[4825]: W0122 15:25:49.136826 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba2eb0b7_43ae_49a7_9a19_c969039de168.slice/crio-7d9ebc41110fc5648f9ed77a6d31a641feba527f22461e90afe3faf4ae24268d WatchSource:0}: Error finding container 7d9ebc41110fc5648f9ed77a6d31a641feba527f22461e90afe3faf4ae24268d: Status 404 returned error can't find the container with id 7d9ebc41110fc5648f9ed77a6d31a641feba527f22461e90afe3faf4ae24268d Jan 22 15:25:49 crc kubenswrapper[4825]: W0122 15:25:49.137943 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1667d84_12f5_4cb0_9a46_f69c25bea89d.slice/crio-48be370fba4595b37562e075715aff129e7e96c2fe26fdcd459f9ce4cbe0a45a WatchSource:0}: Error finding container 48be370fba4595b37562e075715aff129e7e96c2fe26fdcd459f9ce4cbe0a45a: Status 404 returned error can't find the container with id 48be370fba4595b37562e075715aff129e7e96c2fe26fdcd459f9ce4cbe0a45a Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.165949 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8npv4\" (UniqueName: \"kubernetes.io/projected/81d43c37-4152-47d0-be95-a390693902e9-kube-api-access-8npv4\") pod \"console-f9d7485db-qvds8\" (UID: 
\"81d43c37-4152-47d0-be95-a390693902e9\") " pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.172346 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" event={"ID":"3f90b820-57dd-4be0-9648-de26783bc914","Type":"ContainerStarted","Data":"87d705a39f6f429067200fa2f4f83ce06e9a93f7999abb20cdb63f3047033498"} Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.173581 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sccv" event={"ID":"5542df19-2024-4e82-a6b4-ba27c678a6f3","Type":"ContainerStarted","Data":"9ee731be5ee9c620bd72e651cf2bbab07857a8f5de88951fac8eb1e3a4444f22"} Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.176188 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" event={"ID":"7b047505-d780-4596-86a8-92c7a3e8a07c","Type":"ContainerStarted","Data":"ea6f720c04e2cbcf4426e05f30cfffc2a345863a71c99600406d73e5f354e2b7"} Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.177704 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" event={"ID":"72257f30-9f17-4974-aeec-0755be040824","Type":"ContainerStarted","Data":"cd87fc9a5352a354d0f3e7ac91453d10c8f7da4e5dd099c86c053faea289dd33"} Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.178904 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw" event={"ID":"c1667d84-12f5-4cb0-9a46-f69c25bea89d","Type":"ContainerStarted","Data":"48be370fba4595b37562e075715aff129e7e96c2fe26fdcd459f9ce4cbe0a45a"} Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.179704 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" event={"ID":"ba2eb0b7-43ae-49a7-9a19-c969039de168","Type":"ContainerStarted","Data":"7d9ebc41110fc5648f9ed77a6d31a641feba527f22461e90afe3faf4ae24268d"} Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.180111 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx88n\" (UniqueName: \"kubernetes.io/projected/39491ce6-ed96-48da-92ed-17b549f1da0e-kube-api-access-kx88n\") pod \"olm-operator-6b444d44fb-x9w66\" (UID: \"39491ce6-ed96-48da-92ed-17b549f1da0e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.180647 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" event={"ID":"e57fb87b-8cec-4c88-a802-69631aef1a2e","Type":"ContainerStarted","Data":"e900a7bb1f932c28e518998b40924e934eb93085cb2f086edb22d4f29dc2204d"} Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.181922 4825 generic.go:334] "Generic (PLEG): container finished" podID="aeb10bb8-1d41-433a-8f08-2edf3eefaa7c" containerID="8a97ee78b3be1b42120ac6e4ed78af7ea983fbb37e0ef5006cb2e2ffe8a51865" exitCode=0 Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.182007 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-phwjz" event={"ID":"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c","Type":"ContainerDied","Data":"8a97ee78b3be1b42120ac6e4ed78af7ea983fbb37e0ef5006cb2e2ffe8a51865"} Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.182041 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-phwjz" event={"ID":"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c","Type":"ContainerStarted","Data":"f8e21ba190f47255abdbce759a11934040ab9d67d3f4ecb62e5f71bca79eabd5"} Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.183349 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-fwkrq" event={"ID":"5b874959-d450-49f1-ab62-1852a45fc258","Type":"ContainerStarted","Data":"098c73e1a434855ecaa522707c30c65d5c5c481dfc1e2078852f4e6fde1d1602"} Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.183401 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fwkrq" event={"ID":"5b874959-d450-49f1-ab62-1852a45fc258","Type":"ContainerStarted","Data":"31cd88984a416c44ffd01400152f369754fb45842348c685642a57ee11da8f9f"} Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.188113 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-82rs5" event={"ID":"6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73","Type":"ContainerStarted","Data":"4dac3081e3655ed58f3b988f72454e5247b18034581746886a429ab7f70c6672"} Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.190664 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqmc8" event={"ID":"07701433-aa2e-4b7a-a542-a1c4ecd5135e","Type":"ContainerStarted","Data":"6f4eb701f8d2062ec84df19a1fd514e0e6b180c1b90404758b462e1e2d9f7edf"} Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.191790 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-f7rxw" event={"ID":"bb8c16fb-b627-4b4d-8c02-5f9537eea746","Type":"ContainerStarted","Data":"e9402fdd2e15fde9ea939ffb72b0d851897101e8f3e1918c626c7df0f37e7e13"} Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.203891 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s6lk\" (UniqueName: \"kubernetes.io/projected/f5870a5b-0a7c-4b87-8894-d7b88a9864ba-kube-api-access-5s6lk\") pod \"ingress-canary-7hzt4\" (UID: \"f5870a5b-0a7c-4b87-8894-d7b88a9864ba\") " pod="openshift-ingress-canary/ingress-canary-7hzt4" Jan 22 15:25:49 crc kubenswrapper[4825]: 
I0122 15:25:49.207940 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.226287 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkf95\" (UniqueName: \"kubernetes.io/projected/d6ea7686-9c0f-4e31-9d58-05888aedccc1-kube-api-access-mkf95\") pod \"etcd-operator-b45778765-t8cfv\" (UID: \"d6ea7686-9c0f-4e31-9d58-05888aedccc1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.263093 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b6c94df9-bcdf-40c8-9217-781d33efd3db-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c8lt5\" (UID: \"b6c94df9-bcdf-40c8-9217-781d33efd3db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.265026 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4275h\" (UniqueName: \"kubernetes.io/projected/b6c94df9-bcdf-40c8-9217-781d33efd3db-kube-api-access-4275h\") pod \"ingress-operator-5b745b69d9-c8lt5\" (UID: \"b6c94df9-bcdf-40c8-9217-781d33efd3db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.280937 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.286213 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z22f8\" (UniqueName: \"kubernetes.io/projected/1ce5be48-c020-46f1-b576-07d8fb6197db-kube-api-access-z22f8\") pod \"service-ca-9c57cc56f-wbdh2\" (UID: \"1ce5be48-c020-46f1-b576-07d8fb6197db\") " pod="openshift-service-ca/service-ca-9c57cc56f-wbdh2" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.300372 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5pd4\" (UniqueName: \"kubernetes.io/projected/4d7e321a-a057-40e4-9826-4d9b8b46b30a-kube-api-access-v5pd4\") pod \"marketplace-operator-79b997595-kv4vj\" (UID: \"4d7e321a-a057-40e4-9826-4d9b8b46b30a\") " pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.321028 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzss9\" (UniqueName: \"kubernetes.io/projected/2ce8aeac-1477-47a7-88ce-d0a46c66c5d6-kube-api-access-fzss9\") pod \"dns-default-7t586\" (UID: \"2ce8aeac-1477-47a7-88ce-d0a46c66c5d6\") " pod="openshift-dns/dns-default-7t586" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.324259 4825 request.go:700] Waited for 1.823733161s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac/token Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.328402 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7hzt4" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.337722 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.348479 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srczq\" (UniqueName: \"kubernetes.io/projected/7df2707d-94e8-4d29-84e8-14a50058f164-kube-api-access-srczq\") pod \"multus-admission-controller-857f4d67dd-8g6lv\" (UID: \"7df2707d-94e8-4d29-84e8-14a50058f164\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8g6lv" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.350269 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.350782 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.353399 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.358356 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wbdh2" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.364395 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.384119 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.409028 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.444357 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.465381 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.496844 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-ca-trust-extracted\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.496887 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4b829617-fb00-4e94-8cd8-539b13ffc74f-auth-proxy-config\") pod \"machine-approver-56656f9798-6jdgh\" (UID: \"4b829617-fb00-4e94-8cd8-539b13ffc74f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jdgh" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.496927 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/40f68db0-1962-4d15-a903-7eb1fb30d414-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-96g5g\" (UID: \"40f68db0-1962-4d15-a903-7eb1fb30d414\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-96g5g" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.496942 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4b829617-fb00-4e94-8cd8-539b13ffc74f-machine-approver-tls\") pod \"machine-approver-56656f9798-6jdgh\" (UID: \"4b829617-fb00-4e94-8cd8-539b13ffc74f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jdgh" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.496965 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g55n9\" (UniqueName: \"kubernetes.io/projected/40f68db0-1962-4d15-a903-7eb1fb30d414-kube-api-access-g55n9\") pod \"machine-config-controller-84d6567774-96g5g\" (UID: \"40f68db0-1962-4d15-a903-7eb1fb30d414\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-96g5g" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.497031 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aabad593-88eb-46e8-bd45-c8167c3466ae-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ncxt7\" (UID: \"aabad593-88eb-46e8-bd45-c8167c3466ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ncxt7" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.497086 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/40f68db0-1962-4d15-a903-7eb1fb30d414-proxy-tls\") pod \"machine-config-controller-84d6567774-96g5g\" (UID: \"40f68db0-1962-4d15-a903-7eb1fb30d414\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-96g5g" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.497107 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rh8h\" (UniqueName: \"kubernetes.io/projected/2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1-kube-api-access-4rh8h\") pod \"openshift-config-operator-7777fb866f-wvhkx\" (UID: \"2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.497182 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a05d11ae-f760-4d26-8321-3ff0a38b4177-proxy-tls\") pod \"machine-config-operator-74547568cd-ckqzw\" (UID: \"a05d11ae-f760-4d26-8321-3ff0a38b4177\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.497198 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5qd4\" (UniqueName: \"kubernetes.io/projected/aabad593-88eb-46e8-bd45-c8167c3466ae-kube-api-access-g5qd4\") pod \"kube-storage-version-migrator-operator-b67b599dd-ncxt7\" (UID: \"aabad593-88eb-46e8-bd45-c8167c3466ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ncxt7" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.497213 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-858lz\" (UniqueName: \"kubernetes.io/projected/4b829617-fb00-4e94-8cd8-539b13ffc74f-kube-api-access-858lz\") pod 
\"machine-approver-56656f9798-6jdgh\" (UID: \"4b829617-fb00-4e94-8cd8-539b13ffc74f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jdgh" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.497240 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wvhkx\" (UID: \"2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.497298 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68gcj\" (UniqueName: \"kubernetes.io/projected/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-kube-api-access-68gcj\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.497353 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-registry-certificates\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.497382 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1-serving-cert\") pod \"openshift-config-operator-7777fb866f-wvhkx\" (UID: \"2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx" Jan 22 15:25:49 crc 
kubenswrapper[4825]: I0122 15:25:49.497407 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-trusted-ca\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.497489 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-registry-tls\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.497544 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.497670 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aabad593-88eb-46e8-bd45-c8167c3466ae-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ncxt7\" (UID: \"aabad593-88eb-46e8-bd45-c8167c3466ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ncxt7" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.497736 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d57hz\" (UniqueName: 
\"kubernetes.io/projected/a05d11ae-f760-4d26-8321-3ff0a38b4177-kube-api-access-d57hz\") pod \"machine-config-operator-74547568cd-ckqzw\" (UID: \"a05d11ae-f760-4d26-8321-3ff0a38b4177\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.497811 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-installation-pull-secrets\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.497845 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a05d11ae-f760-4d26-8321-3ff0a38b4177-images\") pod \"machine-config-operator-74547568cd-ckqzw\" (UID: \"a05d11ae-f760-4d26-8321-3ff0a38b4177\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.497882 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-bound-sa-token\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.497922 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a05d11ae-f760-4d26-8321-3ff0a38b4177-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ckqzw\" (UID: \"a05d11ae-f760-4d26-8321-3ff0a38b4177\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.498009 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b829617-fb00-4e94-8cd8-539b13ffc74f-config\") pod \"machine-approver-56656f9798-6jdgh\" (UID: \"4b829617-fb00-4e94-8cd8-539b13ffc74f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jdgh" Jan 22 15:25:49 crc kubenswrapper[4825]: E0122 15:25:49.502414 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:50.002398941 +0000 UTC m=+96.763925851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.574725 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-7t586" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.598371 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:49 crc kubenswrapper[4825]: E0122 15:25:49.598477 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:50.098456277 +0000 UTC m=+96.859983187 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.598548 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8g6lv" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.599095 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d57hz\" (UniqueName: \"kubernetes.io/projected/a05d11ae-f760-4d26-8321-3ff0a38b4177-kube-api-access-d57hz\") pod \"machine-config-operator-74547568cd-ckqzw\" (UID: \"a05d11ae-f760-4d26-8321-3ff0a38b4177\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.599131 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-installation-pull-secrets\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.599152 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a05d11ae-f760-4d26-8321-3ff0a38b4177-images\") pod \"machine-config-operator-74547568cd-ckqzw\" (UID: \"a05d11ae-f760-4d26-8321-3ff0a38b4177\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.599169 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flzgt\" (UniqueName: \"kubernetes.io/projected/98362ccb-0056-41c9-b958-ca0a11e30c45-kube-api-access-flzgt\") pod \"catalog-operator-68c6474976-7v44x\" (UID: \"98362ccb-0056-41c9-b958-ca0a11e30c45\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.599202 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-bound-sa-token\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.599217 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/095c6359-a33b-4176-becb-f60758bb28b4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xhfx7\" (UID: \"095c6359-a33b-4176-becb-f60758bb28b4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xhfx7" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.599243 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a05d11ae-f760-4d26-8321-3ff0a38b4177-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ckqzw\" (UID: \"a05d11ae-f760-4d26-8321-3ff0a38b4177\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.599293 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b829617-fb00-4e94-8cd8-539b13ffc74f-config\") pod \"machine-approver-56656f9798-6jdgh\" (UID: \"4b829617-fb00-4e94-8cd8-539b13ffc74f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jdgh" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.599321 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2fb87ec5-b498-4153-b880-c42bfdd2089c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-89hpb\" (UID: 
\"2fb87ec5-b498-4153-b880-c42bfdd2089c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-89hpb" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.599361 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-ca-trust-extracted\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.599376 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4b829617-fb00-4e94-8cd8-539b13ffc74f-auth-proxy-config\") pod \"machine-approver-56656f9798-6jdgh\" (UID: \"4b829617-fb00-4e94-8cd8-539b13ffc74f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jdgh" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.599390 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/98362ccb-0056-41c9-b958-ca0a11e30c45-profile-collector-cert\") pod \"catalog-operator-68c6474976-7v44x\" (UID: \"98362ccb-0056-41c9-b958-ca0a11e30c45\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.599404 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1cbd0ab2-dc30-46da-8442-206675e887a4-plugins-dir\") pod \"csi-hostpathplugin-9b4mz\" (UID: \"1cbd0ab2-dc30-46da-8442-206675e887a4\") " pod="hostpath-provisioner/csi-hostpathplugin-9b4mz" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.602188 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/a05d11ae-f760-4d26-8321-3ff0a38b4177-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ckqzw\" (UID: \"a05d11ae-f760-4d26-8321-3ff0a38b4177\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.602536 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a05d11ae-f760-4d26-8321-3ff0a38b4177-images\") pod \"machine-config-operator-74547568cd-ckqzw\" (UID: \"a05d11ae-f760-4d26-8321-3ff0a38b4177\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.604065 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b829617-fb00-4e94-8cd8-539b13ffc74f-config\") pod \"machine-approver-56656f9798-6jdgh\" (UID: \"4b829617-fb00-4e94-8cd8-539b13ffc74f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jdgh" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.604070 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khpqc\" (UniqueName: \"kubernetes.io/projected/095c6359-a33b-4176-becb-f60758bb28b4-kube-api-access-khpqc\") pod \"control-plane-machine-set-operator-78cbb6b69f-xhfx7\" (UID: \"095c6359-a33b-4176-becb-f60758bb28b4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xhfx7" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.604133 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/40f68db0-1962-4d15-a903-7eb1fb30d414-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-96g5g\" (UID: \"40f68db0-1962-4d15-a903-7eb1fb30d414\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-96g5g" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.604167 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bda22efd-beea-406e-a9d8-cb04fac11b9c-default-certificate\") pod \"router-default-5444994796-v5ljt\" (UID: \"bda22efd-beea-406e-a9d8-cb04fac11b9c\") " pod="openshift-ingress/router-default-5444994796-v5ljt" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.604189 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af488f77-ce41-4294-a385-1b08650660d0-config\") pod \"service-ca-operator-777779d784-gzl8z\" (UID: \"af488f77-ce41-4294-a385-1b08650660d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gzl8z" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.604241 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4b829617-fb00-4e94-8cd8-539b13ffc74f-machine-approver-tls\") pod \"machine-approver-56656f9798-6jdgh\" (UID: \"4b829617-fb00-4e94-8cd8-539b13ffc74f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jdgh" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.604267 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m25qq\" (UniqueName: \"kubernetes.io/projected/5226e90c-98ae-4cb1-910e-e8c031ab7c8a-kube-api-access-m25qq\") pod \"machine-config-server-857bd\" (UID: \"5226e90c-98ae-4cb1-910e-e8c031ab7c8a\") " pod="openshift-machine-config-operator/machine-config-server-857bd" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.604291 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g55n9\" (UniqueName: 
\"kubernetes.io/projected/40f68db0-1962-4d15-a903-7eb1fb30d414-kube-api-access-g55n9\") pod \"machine-config-controller-84d6567774-96g5g\" (UID: \"40f68db0-1962-4d15-a903-7eb1fb30d414\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-96g5g" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.604310 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-ca-trust-extracted\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.604358 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhchv\" (UniqueName: \"kubernetes.io/projected/2fb87ec5-b498-4153-b880-c42bfdd2089c-kube-api-access-xhchv\") pod \"package-server-manager-789f6589d5-89hpb\" (UID: \"2fb87ec5-b498-4153-b880-c42bfdd2089c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-89hpb" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.604394 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aabad593-88eb-46e8-bd45-c8167c3466ae-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ncxt7\" (UID: \"aabad593-88eb-46e8-bd45-c8167c3466ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ncxt7" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.604431 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8xds\" (UniqueName: \"kubernetes.io/projected/caae48a6-c8ee-4c56-91cc-fe8f4b21e313-kube-api-access-s8xds\") pod \"collect-profiles-29484915-hkqwz\" (UID: \"caae48a6-c8ee-4c56-91cc-fe8f4b21e313\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.604455 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmlfq\" (UniqueName: \"kubernetes.io/projected/288b665c-d9e3-4f3c-93f5-4632d98b9028-kube-api-access-lmlfq\") pod \"dns-operator-744455d44c-24pkt\" (UID: \"288b665c-d9e3-4f3c-93f5-4632d98b9028\") " pod="openshift-dns-operator/dns-operator-744455d44c-24pkt" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.604481 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/40f68db0-1962-4d15-a903-7eb1fb30d414-proxy-tls\") pod \"machine-config-controller-84d6567774-96g5g\" (UID: \"40f68db0-1962-4d15-a903-7eb1fb30d414\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-96g5g" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.604535 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rh8h\" (UniqueName: \"kubernetes.io/projected/2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1-kube-api-access-4rh8h\") pod \"openshift-config-operator-7777fb866f-wvhkx\" (UID: \"2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.604562 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/579c0fe7-fbc1-4262-89d1-a8abfb6fc655-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j2wkw\" (UID: \"579c0fe7-fbc1-4262-89d1-a8abfb6fc655\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2wkw" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.604623 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4b829617-fb00-4e94-8cd8-539b13ffc74f-auth-proxy-config\") pod \"machine-approver-56656f9798-6jdgh\" (UID: \"4b829617-fb00-4e94-8cd8-539b13ffc74f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jdgh" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.604861 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/579c0fe7-fbc1-4262-89d1-a8abfb6fc655-config\") pod \"kube-controller-manager-operator-78b949d7b-j2wkw\" (UID: \"579c0fe7-fbc1-4262-89d1-a8abfb6fc655\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2wkw" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.605114 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3aa1598-50ce-4efb-98b6-ae06c5ce75af-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qvrsf\" (UID: \"e3aa1598-50ce-4efb-98b6-ae06c5ce75af\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qvrsf" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.605148 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3aa1598-50ce-4efb-98b6-ae06c5ce75af-config\") pod \"kube-apiserver-operator-766d6c64bb-qvrsf\" (UID: \"e3aa1598-50ce-4efb-98b6-ae06c5ce75af\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qvrsf" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.605168 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3290d6b-2824-4136-ac65-df0fa10995d9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4xhdn\" (UID: 
\"c3290d6b-2824-4136-ac65-df0fa10995d9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4xhdn" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.605224 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/40f68db0-1962-4d15-a903-7eb1fb30d414-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-96g5g\" (UID: \"40f68db0-1962-4d15-a903-7eb1fb30d414\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-96g5g" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.605262 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aabad593-88eb-46e8-bd45-c8167c3466ae-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ncxt7\" (UID: \"aabad593-88eb-46e8-bd45-c8167c3466ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ncxt7" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.605412 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bda22efd-beea-406e-a9d8-cb04fac11b9c-stats-auth\") pod \"router-default-5444994796-v5ljt\" (UID: \"bda22efd-beea-406e-a9d8-cb04fac11b9c\") " pod="openshift-ingress/router-default-5444994796-v5ljt" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.605450 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bda22efd-beea-406e-a9d8-cb04fac11b9c-metrics-certs\") pod \"router-default-5444994796-v5ljt\" (UID: \"bda22efd-beea-406e-a9d8-cb04fac11b9c\") " pod="openshift-ingress/router-default-5444994796-v5ljt" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.605472 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1cbd0ab2-dc30-46da-8442-206675e887a4-csi-data-dir\") pod \"csi-hostpathplugin-9b4mz\" (UID: \"1cbd0ab2-dc30-46da-8442-206675e887a4\") " pod="hostpath-provisioner/csi-hostpathplugin-9b4mz" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.605498 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1132329f-90a4-4bcf-a303-28ec140c7c3f-tmpfs\") pod \"packageserver-d55dfcdfc-64d9l\" (UID: \"1132329f-90a4-4bcf-a303-28ec140c7c3f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.605561 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a05d11ae-f760-4d26-8321-3ff0a38b4177-proxy-tls\") pod \"machine-config-operator-74547568cd-ckqzw\" (UID: \"a05d11ae-f760-4d26-8321-3ff0a38b4177\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.605588 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5qd4\" (UniqueName: \"kubernetes.io/projected/aabad593-88eb-46e8-bd45-c8167c3466ae-kube-api-access-g5qd4\") pod \"kube-storage-version-migrator-operator-b67b599dd-ncxt7\" (UID: \"aabad593-88eb-46e8-bd45-c8167c3466ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ncxt7" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.605712 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wvhkx\" (UID: 
\"2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.605737 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-858lz\" (UniqueName: \"kubernetes.io/projected/4b829617-fb00-4e94-8cd8-539b13ffc74f-kube-api-access-858lz\") pod \"machine-approver-56656f9798-6jdgh\" (UID: \"4b829617-fb00-4e94-8cd8-539b13ffc74f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jdgh" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.605763 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3290d6b-2824-4136-ac65-df0fa10995d9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4xhdn\" (UID: \"c3290d6b-2824-4136-ac65-df0fa10995d9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4xhdn" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.605786 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9pk5\" (UniqueName: \"kubernetes.io/projected/1132329f-90a4-4bcf-a303-28ec140c7c3f-kube-api-access-w9pk5\") pod \"packageserver-d55dfcdfc-64d9l\" (UID: \"1132329f-90a4-4bcf-a303-28ec140c7c3f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.605833 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/caae48a6-c8ee-4c56-91cc-fe8f4b21e313-config-volume\") pod \"collect-profiles-29484915-hkqwz\" (UID: \"caae48a6-c8ee-4c56-91cc-fe8f4b21e313\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.605889 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/579c0fe7-fbc1-4262-89d1-a8abfb6fc655-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-j2wkw\" (UID: \"579c0fe7-fbc1-4262-89d1-a8abfb6fc655\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2wkw" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.605936 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/288b665c-d9e3-4f3c-93f5-4632d98b9028-metrics-tls\") pod \"dns-operator-744455d44c-24pkt\" (UID: \"288b665c-d9e3-4f3c-93f5-4632d98b9028\") " pod="openshift-dns-operator/dns-operator-744455d44c-24pkt" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.605976 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5226e90c-98ae-4cb1-910e-e8c031ab7c8a-certs\") pod \"machine-config-server-857bd\" (UID: \"5226e90c-98ae-4cb1-910e-e8c031ab7c8a\") " pod="openshift-machine-config-operator/machine-config-server-857bd" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606037 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttz6m\" (UniqueName: \"kubernetes.io/projected/1cbd0ab2-dc30-46da-8442-206675e887a4-kube-api-access-ttz6m\") pod \"csi-hostpathplugin-9b4mz\" (UID: \"1cbd0ab2-dc30-46da-8442-206675e887a4\") " pod="hostpath-provisioner/csi-hostpathplugin-9b4mz" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606091 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68gcj\" (UniqueName: \"kubernetes.io/projected/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-kube-api-access-68gcj\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: 
\"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606159 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1cbd0ab2-dc30-46da-8442-206675e887a4-mountpoint-dir\") pod \"csi-hostpathplugin-9b4mz\" (UID: \"1cbd0ab2-dc30-46da-8442-206675e887a4\") " pod="hostpath-provisioner/csi-hostpathplugin-9b4mz" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606199 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-registry-certificates\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606220 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1-serving-cert\") pod \"openshift-config-operator-7777fb866f-wvhkx\" (UID: \"2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606238 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1cbd0ab2-dc30-46da-8442-206675e887a4-socket-dir\") pod \"csi-hostpathplugin-9b4mz\" (UID: \"1cbd0ab2-dc30-46da-8442-206675e887a4\") " pod="hostpath-provisioner/csi-hostpathplugin-9b4mz" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606259 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/1cbd0ab2-dc30-46da-8442-206675e887a4-registration-dir\") pod \"csi-hostpathplugin-9b4mz\" (UID: \"1cbd0ab2-dc30-46da-8442-206675e887a4\") " pod="hostpath-provisioner/csi-hostpathplugin-9b4mz" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606319 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-trusted-ca\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606343 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/caae48a6-c8ee-4c56-91cc-fe8f4b21e313-secret-volume\") pod \"collect-profiles-29484915-hkqwz\" (UID: \"caae48a6-c8ee-4c56-91cc-fe8f4b21e313\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606394 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1132329f-90a4-4bcf-a303-28ec140c7c3f-webhook-cert\") pod \"packageserver-d55dfcdfc-64d9l\" (UID: \"1132329f-90a4-4bcf-a303-28ec140c7c3f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606416 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3aa1598-50ce-4efb-98b6-ae06c5ce75af-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qvrsf\" (UID: \"e3aa1598-50ce-4efb-98b6-ae06c5ce75af\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qvrsf" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 
15:25:49.606444 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bda22efd-beea-406e-a9d8-cb04fac11b9c-service-ca-bundle\") pod \"router-default-5444994796-v5ljt\" (UID: \"bda22efd-beea-406e-a9d8-cb04fac11b9c\") " pod="openshift-ingress/router-default-5444994796-v5ljt" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606473 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-registry-tls\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606492 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv2xx\" (UniqueName: \"kubernetes.io/projected/bda22efd-beea-406e-a9d8-cb04fac11b9c-kube-api-access-vv2xx\") pod \"router-default-5444994796-v5ljt\" (UID: \"bda22efd-beea-406e-a9d8-cb04fac11b9c\") " pod="openshift-ingress/router-default-5444994796-v5ljt" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606559 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606582 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af488f77-ce41-4294-a385-1b08650660d0-serving-cert\") pod \"service-ca-operator-777779d784-gzl8z\" (UID: 
\"af488f77-ce41-4294-a385-1b08650660d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gzl8z" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606607 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98362ccb-0056-41c9-b958-ca0a11e30c45-srv-cert\") pod \"catalog-operator-68c6474976-7v44x\" (UID: \"98362ccb-0056-41c9-b958-ca0a11e30c45\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606631 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1132329f-90a4-4bcf-a303-28ec140c7c3f-apiservice-cert\") pod \"packageserver-d55dfcdfc-64d9l\" (UID: \"1132329f-90a4-4bcf-a303-28ec140c7c3f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606688 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wgxv\" (UniqueName: \"kubernetes.io/projected/af488f77-ce41-4294-a385-1b08650660d0-kube-api-access-5wgxv\") pod \"service-ca-operator-777779d784-gzl8z\" (UID: \"af488f77-ce41-4294-a385-1b08650660d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gzl8z" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606737 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5226e90c-98ae-4cb1-910e-e8c031ab7c8a-node-bootstrap-token\") pod \"machine-config-server-857bd\" (UID: \"5226e90c-98ae-4cb1-910e-e8c031ab7c8a\") " pod="openshift-machine-config-operator/machine-config-server-857bd" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606766 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aabad593-88eb-46e8-bd45-c8167c3466ae-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ncxt7\" (UID: \"aabad593-88eb-46e8-bd45-c8167c3466ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ncxt7" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606811 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3290d6b-2824-4136-ac65-df0fa10995d9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4xhdn\" (UID: \"c3290d6b-2824-4136-ac65-df0fa10995d9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4xhdn" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.606854 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hvdg\" (UniqueName: \"kubernetes.io/projected/9143af67-fb1d-4ef5-a862-bbd9a1afd2d8-kube-api-access-2hvdg\") pod \"migrator-59844c95c7-4g57x\" (UID: \"9143af67-fb1d-4ef5-a862-bbd9a1afd2d8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4g57x" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.608275 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-registry-certificates\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.609711 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/40f68db0-1962-4d15-a903-7eb1fb30d414-proxy-tls\") pod \"machine-config-controller-84d6567774-96g5g\" 
(UID: \"40f68db0-1962-4d15-a903-7eb1fb30d414\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-96g5g" Jan 22 15:25:49 crc kubenswrapper[4825]: E0122 15:25:49.610410 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:50.110398268 +0000 UTC m=+96.871925178 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.614234 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wvhkx\" (UID: \"2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.614272 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-installation-pull-secrets\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.618391 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/a05d11ae-f760-4d26-8321-3ff0a38b4177-proxy-tls\") pod \"machine-config-operator-74547568cd-ckqzw\" (UID: \"a05d11ae-f760-4d26-8321-3ff0a38b4177\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.619815 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-trusted-ca\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.620435 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-registry-tls\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.621108 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4b829617-fb00-4e94-8cd8-539b13ffc74f-machine-approver-tls\") pod \"machine-approver-56656f9798-6jdgh\" (UID: \"4b829617-fb00-4e94-8cd8-539b13ffc74f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jdgh" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.629643 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aabad593-88eb-46e8-bd45-c8167c3466ae-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ncxt7\" (UID: \"aabad593-88eb-46e8-bd45-c8167c3466ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ncxt7" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.634338 
4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1-serving-cert\") pod \"openshift-config-operator-7777fb866f-wvhkx\" (UID: \"2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.634652 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d57hz\" (UniqueName: \"kubernetes.io/projected/a05d11ae-f760-4d26-8321-3ff0a38b4177-kube-api-access-d57hz\") pod \"machine-config-operator-74547568cd-ckqzw\" (UID: \"a05d11ae-f760-4d26-8321-3ff0a38b4177\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.649338 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g55n9\" (UniqueName: \"kubernetes.io/projected/40f68db0-1962-4d15-a903-7eb1fb30d414-kube-api-access-g55n9\") pod \"machine-config-controller-84d6567774-96g5g\" (UID: \"40f68db0-1962-4d15-a903-7eb1fb30d414\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-96g5g"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.669319 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-bound-sa-token\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.696530 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-96g5g"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.697801 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qvds8"]
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.718528 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:49 crc kubenswrapper[4825]: E0122 15:25:49.718602 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:50.21858542 +0000 UTC m=+96.980112330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.719553 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1cbd0ab2-dc30-46da-8442-206675e887a4-mountpoint-dir\") pod \"csi-hostpathplugin-9b4mz\" (UID: \"1cbd0ab2-dc30-46da-8442-206675e887a4\") " pod="hostpath-provisioner/csi-hostpathplugin-9b4mz"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.719576 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1cbd0ab2-dc30-46da-8442-206675e887a4-socket-dir\") pod \"csi-hostpathplugin-9b4mz\" (UID: \"1cbd0ab2-dc30-46da-8442-206675e887a4\") " pod="hostpath-provisioner/csi-hostpathplugin-9b4mz"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.719599 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1cbd0ab2-dc30-46da-8442-206675e887a4-registration-dir\") pod \"csi-hostpathplugin-9b4mz\" (UID: \"1cbd0ab2-dc30-46da-8442-206675e887a4\") " pod="hostpath-provisioner/csi-hostpathplugin-9b4mz"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.719622 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/caae48a6-c8ee-4c56-91cc-fe8f4b21e313-secret-volume\") pod \"collect-profiles-29484915-hkqwz\" (UID: \"caae48a6-c8ee-4c56-91cc-fe8f4b21e313\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.719643 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1132329f-90a4-4bcf-a303-28ec140c7c3f-webhook-cert\") pod \"packageserver-d55dfcdfc-64d9l\" (UID: \"1132329f-90a4-4bcf-a303-28ec140c7c3f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.719662 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3aa1598-50ce-4efb-98b6-ae06c5ce75af-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qvrsf\" (UID: \"e3aa1598-50ce-4efb-98b6-ae06c5ce75af\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qvrsf"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.719680 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bda22efd-beea-406e-a9d8-cb04fac11b9c-service-ca-bundle\") pod \"router-default-5444994796-v5ljt\" (UID: \"bda22efd-beea-406e-a9d8-cb04fac11b9c\") " pod="openshift-ingress/router-default-5444994796-v5ljt"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.719698 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv2xx\" (UniqueName: \"kubernetes.io/projected/bda22efd-beea-406e-a9d8-cb04fac11b9c-kube-api-access-vv2xx\") pod \"router-default-5444994796-v5ljt\" (UID: \"bda22efd-beea-406e-a9d8-cb04fac11b9c\") " pod="openshift-ingress/router-default-5444994796-v5ljt"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.719732 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.719753 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af488f77-ce41-4294-a385-1b08650660d0-serving-cert\") pod \"service-ca-operator-777779d784-gzl8z\" (UID: \"af488f77-ce41-4294-a385-1b08650660d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gzl8z"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.719772 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98362ccb-0056-41c9-b958-ca0a11e30c45-srv-cert\") pod \"catalog-operator-68c6474976-7v44x\" (UID: \"98362ccb-0056-41c9-b958-ca0a11e30c45\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.719789 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1132329f-90a4-4bcf-a303-28ec140c7c3f-apiservice-cert\") pod \"packageserver-d55dfcdfc-64d9l\" (UID: \"1132329f-90a4-4bcf-a303-28ec140c7c3f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.719818 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wgxv\" (UniqueName: \"kubernetes.io/projected/af488f77-ce41-4294-a385-1b08650660d0-kube-api-access-5wgxv\") pod \"service-ca-operator-777779d784-gzl8z\" (UID: \"af488f77-ce41-4294-a385-1b08650660d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gzl8z"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.719838 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5226e90c-98ae-4cb1-910e-e8c031ab7c8a-node-bootstrap-token\") pod \"machine-config-server-857bd\" (UID: \"5226e90c-98ae-4cb1-910e-e8c031ab7c8a\") " pod="openshift-machine-config-operator/machine-config-server-857bd"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.719859 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3290d6b-2824-4136-ac65-df0fa10995d9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4xhdn\" (UID: \"c3290d6b-2824-4136-ac65-df0fa10995d9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4xhdn"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.719878 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hvdg\" (UniqueName: \"kubernetes.io/projected/9143af67-fb1d-4ef5-a862-bbd9a1afd2d8-kube-api-access-2hvdg\") pod \"migrator-59844c95c7-4g57x\" (UID: \"9143af67-fb1d-4ef5-a862-bbd9a1afd2d8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4g57x"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.719904 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flzgt\" (UniqueName: \"kubernetes.io/projected/98362ccb-0056-41c9-b958-ca0a11e30c45-kube-api-access-flzgt\") pod \"catalog-operator-68c6474976-7v44x\" (UID: \"98362ccb-0056-41c9-b958-ca0a11e30c45\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.719927 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/095c6359-a33b-4176-becb-f60758bb28b4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xhfx7\" (UID: \"095c6359-a33b-4176-becb-f60758bb28b4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xhfx7"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.719959 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2fb87ec5-b498-4153-b880-c42bfdd2089c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-89hpb\" (UID: \"2fb87ec5-b498-4153-b880-c42bfdd2089c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-89hpb"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720003 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1cbd0ab2-dc30-46da-8442-206675e887a4-plugins-dir\") pod \"csi-hostpathplugin-9b4mz\" (UID: \"1cbd0ab2-dc30-46da-8442-206675e887a4\") " pod="hostpath-provisioner/csi-hostpathplugin-9b4mz"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720023 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/98362ccb-0056-41c9-b958-ca0a11e30c45-profile-collector-cert\") pod \"catalog-operator-68c6474976-7v44x\" (UID: \"98362ccb-0056-41c9-b958-ca0a11e30c45\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720044 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khpqc\" (UniqueName: \"kubernetes.io/projected/095c6359-a33b-4176-becb-f60758bb28b4-kube-api-access-khpqc\") pod \"control-plane-machine-set-operator-78cbb6b69f-xhfx7\" (UID: \"095c6359-a33b-4176-becb-f60758bb28b4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xhfx7"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720063 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bda22efd-beea-406e-a9d8-cb04fac11b9c-default-certificate\") pod \"router-default-5444994796-v5ljt\" (UID: \"bda22efd-beea-406e-a9d8-cb04fac11b9c\") " pod="openshift-ingress/router-default-5444994796-v5ljt"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720080 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af488f77-ce41-4294-a385-1b08650660d0-config\") pod \"service-ca-operator-777779d784-gzl8z\" (UID: \"af488f77-ce41-4294-a385-1b08650660d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gzl8z"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720101 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m25qq\" (UniqueName: \"kubernetes.io/projected/5226e90c-98ae-4cb1-910e-e8c031ab7c8a-kube-api-access-m25qq\") pod \"machine-config-server-857bd\" (UID: \"5226e90c-98ae-4cb1-910e-e8c031ab7c8a\") " pod="openshift-machine-config-operator/machine-config-server-857bd"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720127 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhchv\" (UniqueName: \"kubernetes.io/projected/2fb87ec5-b498-4153-b880-c42bfdd2089c-kube-api-access-xhchv\") pod \"package-server-manager-789f6589d5-89hpb\" (UID: \"2fb87ec5-b498-4153-b880-c42bfdd2089c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-89hpb"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720153 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8xds\" (UniqueName: \"kubernetes.io/projected/caae48a6-c8ee-4c56-91cc-fe8f4b21e313-kube-api-access-s8xds\") pod \"collect-profiles-29484915-hkqwz\" (UID: \"caae48a6-c8ee-4c56-91cc-fe8f4b21e313\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720171 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmlfq\" (UniqueName: \"kubernetes.io/projected/288b665c-d9e3-4f3c-93f5-4632d98b9028-kube-api-access-lmlfq\") pod \"dns-operator-744455d44c-24pkt\" (UID: \"288b665c-d9e3-4f3c-93f5-4632d98b9028\") " pod="openshift-dns-operator/dns-operator-744455d44c-24pkt"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720191 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/579c0fe7-fbc1-4262-89d1-a8abfb6fc655-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j2wkw\" (UID: \"579c0fe7-fbc1-4262-89d1-a8abfb6fc655\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2wkw"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720235 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/579c0fe7-fbc1-4262-89d1-a8abfb6fc655-config\") pod \"kube-controller-manager-operator-78b949d7b-j2wkw\" (UID: \"579c0fe7-fbc1-4262-89d1-a8abfb6fc655\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2wkw"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720256 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3aa1598-50ce-4efb-98b6-ae06c5ce75af-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qvrsf\" (UID: \"e3aa1598-50ce-4efb-98b6-ae06c5ce75af\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qvrsf"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720275 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3aa1598-50ce-4efb-98b6-ae06c5ce75af-config\") pod \"kube-apiserver-operator-766d6c64bb-qvrsf\" (UID: \"e3aa1598-50ce-4efb-98b6-ae06c5ce75af\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qvrsf"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720291 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3290d6b-2824-4136-ac65-df0fa10995d9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4xhdn\" (UID: \"c3290d6b-2824-4136-ac65-df0fa10995d9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4xhdn"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720307 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bda22efd-beea-406e-a9d8-cb04fac11b9c-stats-auth\") pod \"router-default-5444994796-v5ljt\" (UID: \"bda22efd-beea-406e-a9d8-cb04fac11b9c\") " pod="openshift-ingress/router-default-5444994796-v5ljt"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720324 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bda22efd-beea-406e-a9d8-cb04fac11b9c-metrics-certs\") pod \"router-default-5444994796-v5ljt\" (UID: \"bda22efd-beea-406e-a9d8-cb04fac11b9c\") " pod="openshift-ingress/router-default-5444994796-v5ljt"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720341 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1cbd0ab2-dc30-46da-8442-206675e887a4-csi-data-dir\") pod \"csi-hostpathplugin-9b4mz\" (UID: \"1cbd0ab2-dc30-46da-8442-206675e887a4\") " pod="hostpath-provisioner/csi-hostpathplugin-9b4mz"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720359 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1132329f-90a4-4bcf-a303-28ec140c7c3f-tmpfs\") pod \"packageserver-d55dfcdfc-64d9l\" (UID: \"1132329f-90a4-4bcf-a303-28ec140c7c3f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720394 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3290d6b-2824-4136-ac65-df0fa10995d9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4xhdn\" (UID: \"c3290d6b-2824-4136-ac65-df0fa10995d9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4xhdn"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720412 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9pk5\" (UniqueName: \"kubernetes.io/projected/1132329f-90a4-4bcf-a303-28ec140c7c3f-kube-api-access-w9pk5\") pod \"packageserver-d55dfcdfc-64d9l\" (UID: \"1132329f-90a4-4bcf-a303-28ec140c7c3f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720433 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/caae48a6-c8ee-4c56-91cc-fe8f4b21e313-config-volume\") pod \"collect-profiles-29484915-hkqwz\" (UID: \"caae48a6-c8ee-4c56-91cc-fe8f4b21e313\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720453 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/579c0fe7-fbc1-4262-89d1-a8abfb6fc655-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-j2wkw\" (UID: \"579c0fe7-fbc1-4262-89d1-a8abfb6fc655\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2wkw"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720471 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/288b665c-d9e3-4f3c-93f5-4632d98b9028-metrics-tls\") pod \"dns-operator-744455d44c-24pkt\" (UID: \"288b665c-d9e3-4f3c-93f5-4632d98b9028\") " pod="openshift-dns-operator/dns-operator-744455d44c-24pkt"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720490 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttz6m\" (UniqueName: \"kubernetes.io/projected/1cbd0ab2-dc30-46da-8442-206675e887a4-kube-api-access-ttz6m\") pod \"csi-hostpathplugin-9b4mz\" (UID: \"1cbd0ab2-dc30-46da-8442-206675e887a4\") " pod="hostpath-provisioner/csi-hostpathplugin-9b4mz"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.720507 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5226e90c-98ae-4cb1-910e-e8c031ab7c8a-certs\") pod \"machine-config-server-857bd\" (UID: \"5226e90c-98ae-4cb1-910e-e8c031ab7c8a\") " pod="openshift-machine-config-operator/machine-config-server-857bd"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.721244 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1cbd0ab2-dc30-46da-8442-206675e887a4-mountpoint-dir\") pod \"csi-hostpathplugin-9b4mz\" (UID: \"1cbd0ab2-dc30-46da-8442-206675e887a4\") " pod="hostpath-provisioner/csi-hostpathplugin-9b4mz"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.721447 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1cbd0ab2-dc30-46da-8442-206675e887a4-socket-dir\") pod \"csi-hostpathplugin-9b4mz\" (UID: \"1cbd0ab2-dc30-46da-8442-206675e887a4\") " pod="hostpath-provisioner/csi-hostpathplugin-9b4mz"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.721495 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1cbd0ab2-dc30-46da-8442-206675e887a4-registration-dir\") pod \"csi-hostpathplugin-9b4mz\" (UID: \"1cbd0ab2-dc30-46da-8442-206675e887a4\") " pod="hostpath-provisioner/csi-hostpathplugin-9b4mz"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.760261 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5226e90c-98ae-4cb1-910e-e8c031ab7c8a-certs\") pod \"machine-config-server-857bd\" (UID: \"5226e90c-98ae-4cb1-910e-e8c031ab7c8a\") " pod="openshift-machine-config-operator/machine-config-server-857bd"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.763037 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bda22efd-beea-406e-a9d8-cb04fac11b9c-service-ca-bundle\") pod \"router-default-5444994796-v5ljt\" (UID: \"bda22efd-beea-406e-a9d8-cb04fac11b9c\") " pod="openshift-ingress/router-default-5444994796-v5ljt"
Jan 22 15:25:49 crc kubenswrapper[4825]: E0122 15:25:49.763558 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:50.263543845 +0000 UTC m=+97.025070755 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.766346 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af488f77-ce41-4294-a385-1b08650660d0-serving-cert\") pod \"service-ca-operator-777779d784-gzl8z\" (UID: \"af488f77-ce41-4294-a385-1b08650660d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gzl8z"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.845256 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1cbd0ab2-dc30-46da-8442-206675e887a4-plugins-dir\") pod \"csi-hostpathplugin-9b4mz\" (UID: \"1cbd0ab2-dc30-46da-8442-206675e887a4\") " pod="hostpath-provisioner/csi-hostpathplugin-9b4mz"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.859663 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1cbd0ab2-dc30-46da-8442-206675e887a4-csi-data-dir\") pod \"csi-hostpathplugin-9b4mz\" (UID: \"1cbd0ab2-dc30-46da-8442-206675e887a4\") " pod="hostpath-provisioner/csi-hostpathplugin-9b4mz"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.867340 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:49 crc kubenswrapper[4825]: E0122 15:25:49.868035 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:50.368018972 +0000 UTC m=+97.129545882 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.871786 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bda22efd-beea-406e-a9d8-cb04fac11b9c-default-certificate\") pod \"router-default-5444994796-v5ljt\" (UID: \"bda22efd-beea-406e-a9d8-cb04fac11b9c\") " pod="openshift-ingress/router-default-5444994796-v5ljt"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.873431 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98362ccb-0056-41c9-b958-ca0a11e30c45-srv-cert\") pod \"catalog-operator-68c6474976-7v44x\" (UID: \"98362ccb-0056-41c9-b958-ca0a11e30c45\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.877837 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3290d6b-2824-4136-ac65-df0fa10995d9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4xhdn\" (UID: \"c3290d6b-2824-4136-ac65-df0fa10995d9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4xhdn"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.887246 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/579c0fe7-fbc1-4262-89d1-a8abfb6fc655-config\") pod \"kube-controller-manager-operator-78b949d7b-j2wkw\" (UID: \"579c0fe7-fbc1-4262-89d1-a8abfb6fc655\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2wkw"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.887809 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/095c6359-a33b-4176-becb-f60758bb28b4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xhfx7\" (UID: \"095c6359-a33b-4176-becb-f60758bb28b4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xhfx7"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.890733 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.895356 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af488f77-ce41-4294-a385-1b08650660d0-config\") pod \"service-ca-operator-777779d784-gzl8z\" (UID: \"af488f77-ce41-4294-a385-1b08650660d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gzl8z"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.896042 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/caae48a6-c8ee-4c56-91cc-fe8f4b21e313-secret-volume\") pod \"collect-profiles-29484915-hkqwz\" (UID: \"caae48a6-c8ee-4c56-91cc-fe8f4b21e313\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.897359 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3290d6b-2824-4136-ac65-df0fa10995d9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4xhdn\" (UID: \"c3290d6b-2824-4136-ac65-df0fa10995d9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4xhdn"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.899366 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/579c0fe7-fbc1-4262-89d1-a8abfb6fc655-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-j2wkw\" (UID: \"579c0fe7-fbc1-4262-89d1-a8abfb6fc655\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2wkw"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.906947 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5qd4\" (UniqueName: \"kubernetes.io/projected/aabad593-88eb-46e8-bd45-c8167c3466ae-kube-api-access-g5qd4\") pod \"kube-storage-version-migrator-operator-b67b599dd-ncxt7\" (UID: \"aabad593-88eb-46e8-bd45-c8167c3466ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ncxt7"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.907495 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rh8h\" (UniqueName: \"kubernetes.io/projected/2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1-kube-api-access-4rh8h\") pod \"openshift-config-operator-7777fb866f-wvhkx\" (UID: \"2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.907685 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1132329f-90a4-4bcf-a303-28ec140c7c3f-webhook-cert\") pod \"packageserver-d55dfcdfc-64d9l\" (UID: \"1132329f-90a4-4bcf-a303-28ec140c7c3f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.910281 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/288b665c-d9e3-4f3c-93f5-4632d98b9028-metrics-tls\") pod \"dns-operator-744455d44c-24pkt\" (UID: \"288b665c-d9e3-4f3c-93f5-4632d98b9028\") " pod="openshift-dns-operator/dns-operator-744455d44c-24pkt"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.911236 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1132329f-90a4-4bcf-a303-28ec140c7c3f-tmpfs\") pod \"packageserver-d55dfcdfc-64d9l\" (UID: \"1132329f-90a4-4bcf-a303-28ec140c7c3f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.911961 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68gcj\" (UniqueName: \"kubernetes.io/projected/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-kube-api-access-68gcj\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.916172 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/caae48a6-c8ee-4c56-91cc-fe8f4b21e313-config-volume\") pod \"collect-profiles-29484915-hkqwz\" (UID: \"caae48a6-c8ee-4c56-91cc-fe8f4b21e313\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.917107 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/98362ccb-0056-41c9-b958-ca0a11e30c45-profile-collector-cert\") pod \"catalog-operator-68c6474976-7v44x\" (UID: \"98362ccb-0056-41c9-b958-ca0a11e30c45\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.918498 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3aa1598-50ce-4efb-98b6-ae06c5ce75af-config\") pod \"kube-apiserver-operator-766d6c64bb-qvrsf\" (UID: \"e3aa1598-50ce-4efb-98b6-ae06c5ce75af\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qvrsf"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.921262 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khpqc\" (UniqueName: \"kubernetes.io/projected/095c6359-a33b-4176-becb-f60758bb28b4-kube-api-access-khpqc\") pod \"control-plane-machine-set-operator-78cbb6b69f-xhfx7\" (UID: \"095c6359-a33b-4176-becb-f60758bb28b4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xhfx7"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.936335 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2fb87ec5-b498-4153-b880-c42bfdd2089c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-89hpb\" (UID: \"2fb87ec5-b498-4153-b880-c42bfdd2089c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-89hpb"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.936677 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3aa1598-50ce-4efb-98b6-ae06c5ce75af-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qvrsf\" (UID: \"e3aa1598-50ce-4efb-98b6-ae06c5ce75af\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qvrsf"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.938194 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m25qq\" (UniqueName: \"kubernetes.io/projected/5226e90c-98ae-4cb1-910e-e8c031ab7c8a-kube-api-access-m25qq\") pod \"machine-config-server-857bd\" (UID: \"5226e90c-98ae-4cb1-910e-e8c031ab7c8a\") " pod="openshift-machine-config-operator/machine-config-server-857bd"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.946174 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bda22efd-beea-406e-a9d8-cb04fac11b9c-stats-auth\") pod \"router-default-5444994796-v5ljt\" (UID: \"bda22efd-beea-406e-a9d8-cb04fac11b9c\") " pod="openshift-ingress/router-default-5444994796-v5ljt"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.947820 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5226e90c-98ae-4cb1-910e-e8c031ab7c8a-node-bootstrap-token\") pod \"machine-config-server-857bd\" (UID: \"5226e90c-98ae-4cb1-910e-e8c031ab7c8a\") " pod="openshift-machine-config-operator/machine-config-server-857bd"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.948204 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8xds\" (UniqueName: \"kubernetes.io/projected/caae48a6-c8ee-4c56-91cc-fe8f4b21e313-kube-api-access-s8xds\") pod \"collect-profiles-29484915-hkqwz\" (UID: \"caae48a6-c8ee-4c56-91cc-fe8f4b21e313\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.949558 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hvdg\" (UniqueName: \"kubernetes.io/projected/9143af67-fb1d-4ef5-a862-bbd9a1afd2d8-kube-api-access-2hvdg\") pod \"migrator-59844c95c7-4g57x\" (UID: \"9143af67-fb1d-4ef5-a862-bbd9a1afd2d8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4g57x"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.949791 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhchv\" (UniqueName: \"kubernetes.io/projected/2fb87ec5-b498-4153-b880-c42bfdd2089c-kube-api-access-xhchv\") pod \"package-server-manager-789f6589d5-89hpb\" (UID: \"2fb87ec5-b498-4153-b880-c42bfdd2089c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-89hpb"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.950042 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1132329f-90a4-4bcf-a303-28ec140c7c3f-apiservice-cert\") pod \"packageserver-d55dfcdfc-64d9l\" (UID: \"1132329f-90a4-4bcf-a303-28ec140c7c3f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l"
Jan 22 15:25:49 crc kubenswrapper[4825]: I0122
15:25:49.950308 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/579c0fe7-fbc1-4262-89d1-a8abfb6fc655-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j2wkw\" (UID: \"579c0fe7-fbc1-4262-89d1-a8abfb6fc655\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2wkw" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.954577 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmlfq\" (UniqueName: \"kubernetes.io/projected/288b665c-d9e3-4f3c-93f5-4632d98b9028-kube-api-access-lmlfq\") pod \"dns-operator-744455d44c-24pkt\" (UID: \"288b665c-d9e3-4f3c-93f5-4632d98b9028\") " pod="openshift-dns-operator/dns-operator-744455d44c-24pkt" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.958999 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bda22efd-beea-406e-a9d8-cb04fac11b9c-metrics-certs\") pod \"router-default-5444994796-v5ljt\" (UID: \"bda22efd-beea-406e-a9d8-cb04fac11b9c\") " pod="openshift-ingress/router-default-5444994796-v5ljt" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.965358 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3aa1598-50ce-4efb-98b6-ae06c5ce75af-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qvrsf\" (UID: \"e3aa1598-50ce-4efb-98b6-ae06c5ce75af\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qvrsf" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.965764 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-858lz\" (UniqueName: \"kubernetes.io/projected/4b829617-fb00-4e94-8cd8-539b13ffc74f-kube-api-access-858lz\") pod \"machine-approver-56656f9798-6jdgh\" (UID: \"4b829617-fb00-4e94-8cd8-539b13ffc74f\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jdgh" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.971539 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-857bd" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.974956 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:49 crc kubenswrapper[4825]: E0122 15:25:49.975492 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:50.475476064 +0000 UTC m=+97.237002974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.977652 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4g57x" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.980026 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wgxv\" (UniqueName: \"kubernetes.io/projected/af488f77-ce41-4294-a385-1b08650660d0-kube-api-access-5wgxv\") pod \"service-ca-operator-777779d784-gzl8z\" (UID: \"af488f77-ce41-4294-a385-1b08650660d0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gzl8z" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.992213 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv2xx\" (UniqueName: \"kubernetes.io/projected/bda22efd-beea-406e-a9d8-cb04fac11b9c-kube-api-access-vv2xx\") pod \"router-default-5444994796-v5ljt\" (UID: \"bda22efd-beea-406e-a9d8-cb04fac11b9c\") " pod="openshift-ingress/router-default-5444994796-v5ljt" Jan 22 15:25:49 crc kubenswrapper[4825]: I0122 15:25:49.995048 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flzgt\" (UniqueName: \"kubernetes.io/projected/98362ccb-0056-41c9-b958-ca0a11e30c45-kube-api-access-flzgt\") pod \"catalog-operator-68c6474976-7v44x\" (UID: \"98362ccb-0056-41c9-b958-ca0a11e30c45\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.001541 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-89hpb" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.008973 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3290d6b-2824-4136-ac65-df0fa10995d9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4xhdn\" (UID: \"c3290d6b-2824-4136-ac65-df0fa10995d9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4xhdn" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.065245 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ncxt7" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.072448 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-24pkt" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.072800 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.073199 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xhfx7" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.083186 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:50 crc kubenswrapper[4825]: E0122 15:25:50.083487 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:50.583471741 +0000 UTC m=+97.344998651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.112331 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jdgh" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.116576 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.234165 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttz6m\" (UniqueName: \"kubernetes.io/projected/1cbd0ab2-dc30-46da-8442-206675e887a4-kube-api-access-ttz6m\") pod \"csi-hostpathplugin-9b4mz\" (UID: \"1cbd0ab2-dc30-46da-8442-206675e887a4\") " pod="hostpath-provisioner/csi-hostpathplugin-9b4mz" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.234523 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4xhdn" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.234578 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.234523 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gzl8z" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.234909 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qvrsf" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.235370 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-v5ljt" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.235478 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:50 crc kubenswrapper[4825]: E0122 15:25:50.235752 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:50.735741143 +0000 UTC m=+97.497268053 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.261633 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2wkw" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.320200 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9b4mz" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.320849 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9pk5\" (UniqueName: \"kubernetes.io/projected/1132329f-90a4-4bcf-a303-28ec140c7c3f-kube-api-access-w9pk5\") pod \"packageserver-d55dfcdfc-64d9l\" (UID: \"1132329f-90a4-4bcf-a303-28ec140c7c3f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.336336 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:50 crc kubenswrapper[4825]: E0122 15:25:50.336575 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:50.836559465 +0000 UTC m=+97.598086375 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.336746 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:50 crc kubenswrapper[4825]: E0122 15:25:50.336993 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:50.836973027 +0000 UTC m=+97.598499937 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.343646 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-t8cfv"] Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.384220 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.407309 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw" event={"ID":"c1667d84-12f5-4cb0-9a46-f69c25bea89d","Type":"ContainerStarted","Data":"2ad4ac12054800652c1f04f8175efeb10f5f54f53c70989061cf022d09331001"} Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.440026 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:50 crc kubenswrapper[4825]: E0122 15:25:50.440517 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:50.940501937 +0000 UTC m=+97.702028847 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.482370 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" event={"ID":"3f90b820-57dd-4be0-9648-de26783bc914","Type":"ContainerStarted","Data":"0c7fde5db288981600f363e9750064411db8d228fc5fda67f2fb8b3361c1db11"} Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.482416 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.548413 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:50 crc kubenswrapper[4825]: E0122 15:25:50.550266 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:51.050255304 +0000 UTC m=+97.811782214 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.573201 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sccv" event={"ID":"5542df19-2024-4e82-a6b4-ba27c678a6f3","Type":"ContainerStarted","Data":"ba4369760b99f560abe0276e5229df6524e3523e23fda6faa700db0321cb53bd"} Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.575195 4825 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-f22rt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.575232 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" podUID="3f90b820-57dd-4be0-9648-de26783bc914" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Jan 22 15:25:50 crc kubenswrapper[4825]: W0122 15:25:50.603182 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81d43c37_4152_47d0_be95_a390693902e9.slice/crio-6e1c36df9a34e9c31e62abd8c23652ec080349de7d0f90c3375bd5127ceb521f WatchSource:0}: Error finding container 6e1c36df9a34e9c31e62abd8c23652ec080349de7d0f90c3375bd5127ceb521f: Status 404 returned 
error can't find the container with id 6e1c36df9a34e9c31e62abd8c23652ec080349de7d0f90c3375bd5127ceb521f Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.614788 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" event={"ID":"72257f30-9f17-4974-aeec-0755be040824","Type":"ContainerStarted","Data":"cbb7c82899217e49d71ebe5507d332f194a9026a3b7a2b9f301a847f5bdbd0c1"} Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.616170 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.632266 4825 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-77j7k container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.632406 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" podUID="72257f30-9f17-4974-aeec-0755be040824" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.649855 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:50 crc kubenswrapper[4825]: E0122 15:25:50.650549 4825 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:51.150534871 +0000 UTC m=+97.912061771 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.658258 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-82rs5" event={"ID":"6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73","Type":"ContainerStarted","Data":"e9f0ecfb3df06a59e338f902b5b705987cc06341b65adfe1baf65cdb349aee0b"} Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.658325 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-82rs5" event={"ID":"6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73","Type":"ContainerStarted","Data":"d940833086302e72790adfa7a72c63ffe604f460355535a2c81175f2de8aac20"} Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.689233 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqmc8" event={"ID":"07701433-aa2e-4b7a-a542-a1c4ecd5135e","Type":"ContainerStarted","Data":"c59f94fb28219c6705683a38aead83d27f73c0f1e6625422263720eae129db3d"} Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.709002 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-f7rxw" 
event={"ID":"bb8c16fb-b627-4b4d-8c02-5f9537eea746","Type":"ContainerStarted","Data":"dfc9d849f6655eb296b71399fab0466803e0dc50859ce2b8ce7b230a4fbc2227"} Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.710300 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-f7rxw" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.718484 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.718526 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.757620 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" event={"ID":"e57fb87b-8cec-4c88-a802-69631aef1a2e","Type":"ContainerStarted","Data":"a6f09725f581870e9bebfddddde99485092b314d27d4912234ff05c0551b63a5"} Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.758601 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.760056 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:50 crc kubenswrapper[4825]: E0122 15:25:50.762525 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:51.262496611 +0000 UTC m=+98.024023621 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.778757 4825 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-s7pg5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.778814 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" podUID="e57fb87b-8cec-4c88-a802-69631aef1a2e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.794290 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vpfb7" event={"ID":"247312b2-b2ee-4e5c-bf2d-73dc7f59cc3d","Type":"ContainerStarted","Data":"6e22471d242168dc0e85a197ea35bcc491c6e9e822a90936a1502d544c8e35e1"} Jan 22 
15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.794341 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vpfb7" event={"ID":"247312b2-b2ee-4e5c-bf2d-73dc7f59cc3d","Type":"ContainerStarted","Data":"acec4e10f2ede5ec3a0afab7d0013a413418d250b9fd4bed03fa25e97e06e7ed"} Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.822931 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" podStartSLOduration=73.822917998 podStartE2EDuration="1m13.822917998s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:50.821890219 +0000 UTC m=+97.583417129" watchObservedRunningTime="2026-01-22 15:25:50.822917998 +0000 UTC m=+97.584444908" Jan 22 15:25:50 crc kubenswrapper[4825]: I0122 15:25:50.868970 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:50 crc kubenswrapper[4825]: E0122 15:25:50.869397 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:51.369380777 +0000 UTC m=+98.130907687 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:50.996436 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:51 crc kubenswrapper[4825]: E0122 15:25:51.008390 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:51.508359509 +0000 UTC m=+98.269886419 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.102071 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:51 crc kubenswrapper[4825]: E0122 15:25:51.102393 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:51.602373837 +0000 UTC m=+98.363900747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.203833 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:51 crc kubenswrapper[4825]: E0122 15:25:51.204166 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:51.704153156 +0000 UTC m=+98.465680066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.313539 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:51 crc kubenswrapper[4825]: E0122 15:25:51.313870 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:51.813850792 +0000 UTC m=+98.575377702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.398882 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" event={"ID":"7b047505-d780-4596-86a8-92c7a3e8a07c","Type":"ContainerStarted","Data":"f9597859792fb00fa20e49f70d180f06c450ba2cc5726acd561217e20bd17e6e"} Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.399808 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-fwkrq" Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.416678 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:51 crc kubenswrapper[4825]: E0122 15:25:51.425790 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:51.92577062 +0000 UTC m=+98.687297530 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.425901 4825 patch_prober.go:28] interesting pod/console-operator-58897d9998-fwkrq container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.425959 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fwkrq" podUID="5b874959-d450-49f1-ab62-1852a45fc258" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.457729 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-82rs5" podStartSLOduration=73.457710643 podStartE2EDuration="1m13.457710643s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:51.454737638 +0000 UTC m=+98.216264538" watchObservedRunningTime="2026-01-22 15:25:51.457710643 +0000 UTC m=+98.219237553" Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.468911 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66"] Jan 22 
15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.541043 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:51 crc kubenswrapper[4825]: E0122 15:25:51.541427 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:52.041412316 +0000 UTC m=+98.802939226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.542761 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" podStartSLOduration=73.542746524 podStartE2EDuration="1m13.542746524s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:51.540180411 +0000 UTC m=+98.301707321" watchObservedRunningTime="2026-01-22 15:25:51.542746524 +0000 UTC m=+98.304273434" Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.644486 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:51 crc kubenswrapper[4825]: E0122 15:25:51.645551 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:52.145535112 +0000 UTC m=+98.907062022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.666706 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94qsw" podStartSLOduration=74.666686497 podStartE2EDuration="1m14.666686497s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:51.626394585 +0000 UTC m=+98.387921495" watchObservedRunningTime="2026-01-22 15:25:51.666686497 +0000 UTC m=+98.428213427" Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.712423 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkbmc" podStartSLOduration=74.712406864 
podStartE2EDuration="1m14.712406864s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:51.711488647 +0000 UTC m=+98.473015557" watchObservedRunningTime="2026-01-22 15:25:51.712406864 +0000 UTC m=+98.473933774" Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.750016 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:51 crc kubenswrapper[4825]: E0122 15:25:51.750337 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:52.250321888 +0000 UTC m=+99.011848798 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.752450 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7hzt4"] Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.789197 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9sccv" podStartSLOduration=74.789175908 podStartE2EDuration="1m14.789175908s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:51.760615622 +0000 UTC m=+98.522142532" watchObservedRunningTime="2026-01-22 15:25:51.789175908 +0000 UTC m=+98.550702818" Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.825929 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" podStartSLOduration=74.825916108 podStartE2EDuration="1m14.825916108s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:51.822019357 +0000 UTC m=+98.583546267" watchObservedRunningTime="2026-01-22 15:25:51.825916108 +0000 UTC m=+98.587443018" Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.852391 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:51 crc kubenswrapper[4825]: E0122 15:25:51.852919 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:52.35290316 +0000 UTC m=+99.114430070 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.866381 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7t586"] Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.896646 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kv4vj"] Jan 22 15:25:51 crc kubenswrapper[4825]: I0122 15:25:51.953271 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:52 crc kubenswrapper[4825]: E0122 15:25:52.009107 4825 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:52.509079354 +0000 UTC m=+99.270606264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.034064 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wbdh2"] Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.057429 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:52 crc kubenswrapper[4825]: E0122 15:25:52.057948 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:52.557937131 +0000 UTC m=+99.319464041 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.158076 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:52 crc kubenswrapper[4825]: E0122 15:25:52.158883 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:52.658852196 +0000 UTC m=+99.420379106 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.159287 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:52 crc kubenswrapper[4825]: E0122 15:25:52.159591 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:52.659581206 +0000 UTC m=+99.421108116 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:52 crc kubenswrapper[4825]: W0122 15:25:52.159661 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ce8aeac_1477_47a7_88ce_d0a46c66c5d6.slice/crio-378e238b5e33727d7b4ddfa3167462bb6750908c453d92163f546347d3017edd WatchSource:0}: Error finding container 378e238b5e33727d7b4ddfa3167462bb6750908c453d92163f546347d3017edd: Status 404 returned error can't find the container with id 378e238b5e33727d7b4ddfa3167462bb6750908c453d92163f546347d3017edd Jan 22 15:25:52 crc kubenswrapper[4825]: W0122 15:25:52.161139 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d7e321a_a057_40e4_9826_4d9b8b46b30a.slice/crio-2b179fb5335e5aea1c92abc9d89de840f96d33c35976540339ba211d0e83b4e8 WatchSource:0}: Error finding container 2b179fb5335e5aea1c92abc9d89de840f96d33c35976540339ba211d0e83b4e8: Status 404 returned error can't find the container with id 2b179fb5335e5aea1c92abc9d89de840f96d33c35976540339ba211d0e83b4e8 Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.266592 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:52 crc kubenswrapper[4825]: E0122 15:25:52.267084 4825 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:52.767058949 +0000 UTC m=+99.528585859 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.371399 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqmc8" podStartSLOduration=75.371382041 podStartE2EDuration="1m15.371382041s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:52.251615637 +0000 UTC m=+99.013142547" watchObservedRunningTime="2026-01-22 15:25:52.371382041 +0000 UTC m=+99.132908951" Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.372581 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:52 crc kubenswrapper[4825]: E0122 15:25:52.373006 4825 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:52.872993697 +0000 UTC m=+99.634520607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.476803 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:52 crc kubenswrapper[4825]: E0122 15:25:52.477104 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:52.977088483 +0000 UTC m=+99.738615393 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.505214 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4g57x"] Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.513930 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8g6lv"] Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.555379 4825 generic.go:334] "Generic (PLEG): container finished" podID="ba2eb0b7-43ae-49a7-9a19-c969039de168" containerID="ac55e40c8d5f9068de63a066dfe3850b2bffa8b028d3330943af5982ea26ad2e" exitCode=0 Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.556377 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" event={"ID":"ba2eb0b7-43ae-49a7-9a19-c969039de168","Type":"ContainerDied","Data":"ac55e40c8d5f9068de63a066dfe3850b2bffa8b028d3330943af5982ea26ad2e"} Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.573498 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-f7rxw" podStartSLOduration=75.573480218 podStartE2EDuration="1m15.573480218s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:52.571828011 +0000 UTC m=+99.333354921" watchObservedRunningTime="2026-01-22 15:25:52.573480218 +0000 UTC m=+99.335007128" Jan 22 
15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.577919 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:52 crc kubenswrapper[4825]: E0122 15:25:52.578260 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:53.078246884 +0000 UTC m=+99.839773804 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.684615 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:52 crc kubenswrapper[4825]: E0122 15:25:52.685306 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:53.185291884 +0000 UTC m=+99.946818794 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.690440 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jdgh" event={"ID":"4b829617-fb00-4e94-8cd8-539b13ffc74f","Type":"ContainerStarted","Data":"8722fe4689b993c501d38d9aa30c7403feea891673fb9b1e45ffaccbc3532039"}
Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.690485 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jdgh" event={"ID":"4b829617-fb00-4e94-8cd8-539b13ffc74f","Type":"ContainerStarted","Data":"6290a9bba8f2136ad99fdfadcc318b0efe030eb5b1a0dec47a10920549cba361"}
Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.786971 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:52 crc kubenswrapper[4825]: E0122 15:25:52.787624 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:53.287611809 +0000 UTC m=+100.049138719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.788120 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" event={"ID":"d6ea7686-9c0f-4e31-9d58-05888aedccc1","Type":"ContainerStarted","Data":"8d8fb9d2e42674f1cd3c27b746acb69eb769d7050f4d9717092dd7d5375039c2"}
Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.832142 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vpfb7" podStartSLOduration=75.832125901 podStartE2EDuration="1m15.832125901s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:52.788463163 +0000 UTC m=+99.549990073" watchObservedRunningTime="2026-01-22 15:25:52.832125901 +0000 UTC m=+99.593652811"
Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.833065 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-fwkrq" podStartSLOduration=75.833059788 podStartE2EDuration="1m15.833059788s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:52.830851004 +0000 UTC m=+99.592377914" watchObservedRunningTime="2026-01-22 15:25:52.833059788 +0000 UTC m=+99.594586698"
Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.836393 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vpfb7" event={"ID":"247312b2-b2ee-4e5c-bf2d-73dc7f59cc3d","Type":"ContainerStarted","Data":"355a35c105a0486c5ff843eb20569bac68a363b68bb1c30a9c897c7a753184b0"}
Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.850375 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wbdh2" event={"ID":"1ce5be48-c020-46f1-b576-07d8fb6197db","Type":"ContainerStarted","Data":"6177050d934e663f94ebd1921c29aa52aaaad2fe6267f6d0de86a58925d5425e"}
Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.851306 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qvds8" event={"ID":"81d43c37-4152-47d0-be95-a390693902e9","Type":"ContainerStarted","Data":"be6d79d81a50f8b29a2da1197b70e3f4e89492322f161e08524482d5a188f803"}
Jan 22 15:25:52 crc kubenswrapper[4825]: I0122 15:25:52.851324 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qvds8" event={"ID":"81d43c37-4152-47d0-be95-a390693902e9","Type":"ContainerStarted","Data":"6e1c36df9a34e9c31e62abd8c23652ec080349de7d0f90c3375bd5127ceb521f"}
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:52.998861 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:53 crc kubenswrapper[4825]: E0122 15:25:53.000156 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:53.500129144 +0000 UTC m=+100.261656064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.106494 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:53 crc kubenswrapper[4825]: E0122 15:25:53.134754 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:53.634732751 +0000 UTC m=+100.396259661 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.184363 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7t586" event={"ID":"2ce8aeac-1477-47a7-88ce-d0a46c66c5d6","Type":"ContainerStarted","Data":"378e238b5e33727d7b4ddfa3167462bb6750908c453d92163f546347d3017edd"}
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.246221 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:53 crc kubenswrapper[4825]: E0122 15:25:53.246446 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:53.746432124 +0000 UTC m=+100.507959034 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.246499 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:53 crc kubenswrapper[4825]: E0122 15:25:53.246859 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:53.746853346 +0000 UTC m=+100.508380256 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.259278 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-88zfp" podStartSLOduration=76.259261121 podStartE2EDuration="1m16.259261121s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:53.257538652 +0000 UTC m=+100.019065562" watchObservedRunningTime="2026-01-22 15:25:53.259261121 +0000 UTC m=+100.020788031"
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.324614 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-96g5g"]
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.347533 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:53 crc kubenswrapper[4825]: E0122 15:25:53.347936 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:53.847918335 +0000 UTC m=+100.609445245 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.356895 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-qvds8" podStartSLOduration=76.356881322 podStartE2EDuration="1m16.356881322s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:53.353435423 +0000 UTC m=+100.114962343" watchObservedRunningTime="2026-01-22 15:25:53.356881322 +0000 UTC m=+100.118408232"
Jan 22 15:25:53 crc kubenswrapper[4825]: W0122 15:25:53.376084 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40f68db0_1962_4d15_a903_7eb1fb30d414.slice/crio-639bd08a14fe1fac77c03eb2c9e498af6ff99cf87ffd51ebfb671c65e5ced80d WatchSource:0}: Error finding container 639bd08a14fe1fac77c03eb2c9e498af6ff99cf87ffd51ebfb671c65e5ced80d: Status 404 returned error can't find the container with id 639bd08a14fe1fac77c03eb2c9e498af6ff99cf87ffd51ebfb671c65e5ced80d
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.386949 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l"]
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.394689 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7hzt4" event={"ID":"f5870a5b-0a7c-4b87-8894-d7b88a9864ba","Type":"ContainerStarted","Data":"6fd14a9ae1ad34550ca0bbbedf470a24d9e282a8a64e0105d2c4f72b046be584"}
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.399159 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66" event={"ID":"39491ce6-ed96-48da-92ed-17b549f1da0e","Type":"ContainerStarted","Data":"1d88d2a1accf08f82c97df0b23ea74f1ecefe3499db5698b109176c6046c9f37"}
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.399655 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66"
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.412548 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-v5ljt" event={"ID":"bda22efd-beea-406e-a9d8-cb04fac11b9c","Type":"ContainerStarted","Data":"dd1d7e5414f79f29f44b0e3de76830f7e78a1480070472c107e4d9ea873eaeb9"}
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.412580 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-v5ljt" event={"ID":"bda22efd-beea-406e-a9d8-cb04fac11b9c","Type":"ContainerStarted","Data":"bce149f3f3a8f505ab57f56fee01557ebba2e98b957d0b336115e5c0bfcc9723"}
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.413157 4825 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-x9w66 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.413182 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66" podUID="39491ce6-ed96-48da-92ed-17b549f1da0e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused"
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.426278 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" event={"ID":"4d7e321a-a057-40e4-9826-4d9b8b46b30a","Type":"ContainerStarted","Data":"2b179fb5335e5aea1c92abc9d89de840f96d33c35976540339ba211d0e83b4e8"}
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.427199 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj"
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.444726 4825 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kv4vj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.445255 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" podUID="4d7e321a-a057-40e4-9826-4d9b8b46b30a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused"
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.449416 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:53 crc kubenswrapper[4825]: E0122 15:25:53.449760 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:53.949748786 +0000 UTC m=+100.711275696 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.457562 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-857bd" event={"ID":"5226e90c-98ae-4cb1-910e-e8c031ab7c8a","Type":"ContainerStarted","Data":"dc65c5437e197c14f5e261986c8d8b4a7d95ea720040cbc4d3a27307beb5d8af"}
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.457599 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-857bd" event={"ID":"5226e90c-98ae-4cb1-910e-e8c031ab7c8a","Type":"ContainerStarted","Data":"dca4a66d094c9317cf484ac7e899bf405afff53788ef58292b94f308064e32b1"}
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.466091 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body=
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.466196 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused"
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.486128 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9b4mz"]
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.488282 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-89hpb"]
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.500944 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66" podStartSLOduration=75.500924019 podStartE2EDuration="1m15.500924019s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:53.467872404 +0000 UTC m=+100.229399314" watchObservedRunningTime="2026-01-22 15:25:53.500924019 +0000 UTC m=+100.262450919"
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.507680 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5"]
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.515996 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k"
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.516849 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-v5ljt" podStartSLOduration=76.516834434 podStartE2EDuration="1m16.516834434s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:53.516284878 +0000 UTC m=+100.277811808" watchObservedRunningTime="2026-01-22 15:25:53.516834434 +0000 UTC m=+100.278361344"
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.555899 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:53 crc kubenswrapper[4825]: E0122 15:25:53.557536 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:54.057516627 +0000 UTC m=+100.819043547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.658462 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.658730 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5"
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.658763 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ncxt7"]
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.658783 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-fwkrq"
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.658796 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gzl8z"]
Jan 22 15:25:53 crc kubenswrapper[4825]: E0122 15:25:53.659053 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:54.159041529 +0000 UTC m=+100.920568439 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.663265 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" podStartSLOduration=75.663250009 podStartE2EDuration="1m15.663250009s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:53.66224607 +0000 UTC m=+100.423772980" watchObservedRunningTime="2026-01-22 15:25:53.663250009 +0000 UTC m=+100.424776919"
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.664523 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-857bd" podStartSLOduration=7.6645159849999995 podStartE2EDuration="7.664515985s" podCreationTimestamp="2026-01-22 15:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:53.61395807 +0000 UTC m=+100.375484980" watchObservedRunningTime="2026-01-22 15:25:53.664515985 +0000 UTC m=+100.426042895"
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.798231 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:53 crc kubenswrapper[4825]: E0122 15:25:53.838513 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:54.338485528 +0000 UTC m=+101.100012438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.841278 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx"]
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.843171 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw"]
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.854497 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-f22rt"
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.862507 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qvrsf"]
Jan 22 15:25:53 crc kubenswrapper[4825]: I0122 15:25:53.899937 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:53 crc kubenswrapper[4825]: E0122 15:25:53.900889 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:54.400877612 +0000 UTC m=+101.162404522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.002690 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:54 crc kubenswrapper[4825]: E0122 15:25:54.003063 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:54.503030012 +0000 UTC m=+101.264556932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.003206 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:54 crc kubenswrapper[4825]: E0122 15:25:54.003630 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:54.503621719 +0000 UTC m=+101.265148629 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.056485 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xhfx7"]
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.087301 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4xhdn"]
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.104310 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:54 crc kubenswrapper[4825]: E0122 15:25:54.104591 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:54.604576665 +0000 UTC m=+101.366103565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.114858 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz"]
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.161114 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2wkw"]
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.193874 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x"]
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.205922 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:54 crc kubenswrapper[4825]: E0122 15:25:54.206284 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:54.706269712 +0000 UTC m=+101.467796622 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.236224 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-v5ljt"
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.242094 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-24pkt"]
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.244341 4825 patch_prober.go:28] interesting pod/router-default-5444994796-v5ljt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 15:25:54 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld
Jan 22 15:25:54 crc kubenswrapper[4825]: [+]process-running ok
Jan 22 15:25:54 crc kubenswrapper[4825]: healthz check failed
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.244383 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v5ljt" podUID="bda22efd-beea-406e-a9d8-cb04fac11b9c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.367667 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:54 crc kubenswrapper[4825]: E0122 15:25:54.368230 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:54.868208641 +0000 UTC m=+101.629735551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.485827 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:54 crc kubenswrapper[4825]: E0122 15:25:54.486335 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:54.986317567 +0000 UTC m=+101.747844477 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.587553 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:54 crc kubenswrapper[4825]: E0122 15:25:54.587862 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:55.087848409 +0000 UTC m=+101.849375309 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.595278 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-phwjz" event={"ID":"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c","Type":"ContainerStarted","Data":"3f0d35901e5b02bc468e869cf881ef9538938a9ff1f1aa8794ada2fdca2ab402"}
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.595321 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-phwjz" event={"ID":"aeb10bb8-1d41-433a-8f08-2edf3eefaa7c","Type":"ContainerStarted","Data":"f81f51fb6e2181ef2eb24de61371b78e1015c0a9d51995ace29cd0a2f9215bc4"}
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.640469 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66" event={"ID":"39491ce6-ed96-48da-92ed-17b549f1da0e","Type":"ContainerStarted","Data":"d23768dfe6af280b69cd4be3e8d2dc20d7a33a849b228f3a635afb3560915ce5"}
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.644416 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ncxt7" event={"ID":"aabad593-88eb-46e8-bd45-c8167c3466ae","Type":"ContainerStarted","Data":"32dce84a04f13238b70b2c85dd1385fede88f6752774430ae4802f48892b5514"}
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.644447 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ncxt7" event={"ID":"aabad593-88eb-46e8-bd45-c8167c3466ae","Type":"ContainerStarted","Data":"22388fc38d5e7e6a564b9de18366a9e5c5b8cd3bc073651ed4e1cfd7ed2ded5c"}
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.650843 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9w66"
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.651038 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9b4mz" event={"ID":"1cbd0ab2-dc30-46da-8442-206675e887a4","Type":"ContainerStarted","Data":"eb934521507ad40d44290ddb8934a0f67a516e1e272982625027f1216ef4f141"}
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.678165 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4g57x" event={"ID":"9143af67-fb1d-4ef5-a862-bbd9a1afd2d8","Type":"ContainerStarted","Data":"d3acd155da44c6af6cd1299fe9046f5421068d3b0b03cf53ede870fc5733c6ae"}
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.678212 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4g57x" event={"ID":"9143af67-fb1d-4ef5-a862-bbd9a1afd2d8","Type":"ContainerStarted","Data":"3243657f32d13834bda3bec18ae4dbe5260e0b064a4b2067489861d295beb40b"}
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.688628 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:54 crc kubenswrapper[4825]: E0122 15:25:54.689739 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:55.189728111 +0000 UTC m=+101.951255021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.692618 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jdgh" event={"ID":"4b829617-fb00-4e94-8cd8-539b13ffc74f","Type":"ContainerStarted","Data":"fdebe2882480b04e97cde20848dcac65d4cb411f3fd16bed1e827a6271b022b6"}
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.715922 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx" event={"ID":"2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1","Type":"ContainerStarted","Data":"b64745ad4f204926e4063dce8b582304b547c125d04f4e39f1415e41b0869700"}
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.729417 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4xhdn" event={"ID":"c3290d6b-2824-4136-ac65-df0fa10995d9","Type":"ContainerStarted","Data":"bddf839f5f9e78b458732d6110bd247b2c45e0a3c20d96d87185cd86453a4923"}
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.735012 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn"
event={"ID":"ba2eb0b7-43ae-49a7-9a19-c969039de168","Type":"ContainerStarted","Data":"3da9b366d22cdf07adb9012ef84d189c6bd1944bfc1eeb144afcc085e45dec77"}
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.739740 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" event={"ID":"4d7e321a-a057-40e4-9826-4d9b8b46b30a","Type":"ContainerStarted","Data":"78995e6260e6066a0e3a09656206ae1c0e4a7cffdcdf6ee0c7f8b4b74361b63f"}
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.744454 4825 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kv4vj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.744503 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" podUID="4d7e321a-a057-40e4-9826-4d9b8b46b30a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused"
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.809489 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wbdh2" event={"ID":"1ce5be48-c020-46f1-b576-07d8fb6197db","Type":"ContainerStarted","Data":"d8e5068db8b7307af816555541e527081128d2cfaf91bb1884de81a3a246ae09"}
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.809877 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:54 crc kubenswrapper[4825]: E0122 15:25:54.810825 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:55.310810312 +0000 UTC m=+102.072337222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.825718 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7hzt4" event={"ID":"f5870a5b-0a7c-4b87-8894-d7b88a9864ba","Type":"ContainerStarted","Data":"12e20267e55e22cf86b9ed0fbf8d5a283c9493bf838a25f91e5b1d917bbd1d4b"}
Jan 22 15:25:54 crc kubenswrapper[4825]: I0122 15:25:54.927234 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:54 crc kubenswrapper[4825]: E0122 15:25:54.937759 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:55.437738089 +0000 UTC m=+102.199264999 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.013874 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5" event={"ID":"b6c94df9-bcdf-40c8-9217-781d33efd3db","Type":"ContainerStarted","Data":"a9b957e8f29ea4330cde7cb78ea5b667451fa3ff400eb4d464e0cae3da419536"}
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.014248 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jdgh" podStartSLOduration=78.014233676 podStartE2EDuration="1m18.014233676s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:54.811711908 +0000 UTC m=+101.573238848" watchObservedRunningTime="2026-01-22 15:25:55.014233676 +0000 UTC m=+101.775760586"
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.016172 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" event={"ID":"d6ea7686-9c0f-4e31-9d58-05888aedccc1","Type":"ContainerStarted","Data":"a50b257dd696cee420dcf5928163b035514978dd78d65819d208cc05d2afbf90"}
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.018917 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qvrsf" event={"ID":"e3aa1598-50ce-4efb-98b6-ae06c5ce75af","Type":"ContainerStarted","Data":"312db22dabe0bbe3935c9fdc9e40592d18e45d924c6ee50e3b99369a3bf719a5"}
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.028839 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:55 crc kubenswrapper[4825]: E0122 15:25:55.029336 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:55.529321108 +0000 UTC m=+102.290848018 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.043217 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-89hpb" event={"ID":"2fb87ec5-b498-4153-b880-c42bfdd2089c","Type":"ContainerStarted","Data":"073a057541fd4d95a6cd4d9030ce38699286dd53bb48be8ae5e245dbd019889a"}
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.046941 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-phwjz" podStartSLOduration=78.046923771
podStartE2EDuration="1m18.046923771s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:55.044969085 +0000 UTC m=+101.806495995" watchObservedRunningTime="2026-01-22 15:25:55.046923771 +0000 UTC m=+101.808450681"
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.050437 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" podStartSLOduration=77.050422831 podStartE2EDuration="1m17.050422831s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:55.0171584 +0000 UTC m=+101.778685310" watchObservedRunningTime="2026-01-22 15:25:55.050422831 +0000 UTC m=+101.811949741"
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.061402 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l" event={"ID":"1132329f-90a4-4bcf-a303-28ec140c7c3f","Type":"ContainerStarted","Data":"2947db8be36b0922d90bfc2e58f28eddd643dc1a545cbcacc22033aa3489278f"}
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.061448 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l" event={"ID":"1132329f-90a4-4bcf-a303-28ec140c7c3f","Type":"ContainerStarted","Data":"e2c452718d05eff993fece24d0d98e4e0fabfea5dd641726a8c67667b7aee6a8"}
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.062449 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l"
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.064217 4825 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-64d9l container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body=
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.064258 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l" podUID="1132329f-90a4-4bcf-a303-28ec140c7c3f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused"
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.069365 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xhfx7" event={"ID":"095c6359-a33b-4176-becb-f60758bb28b4","Type":"ContainerStarted","Data":"f5d46615b869be6fe0c448c8090a4c7165e393ca1a9fd8697578a2eb91fadbf6"}
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.072697 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-96g5g" event={"ID":"40f68db0-1962-4d15-a903-7eb1fb30d414","Type":"ContainerStarted","Data":"639bd08a14fe1fac77c03eb2c9e498af6ff99cf87ffd51ebfb671c65e5ced80d"}
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.081176 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x" event={"ID":"98362ccb-0056-41c9-b958-ca0a11e30c45","Type":"ContainerStarted","Data":"bb862cab6dcf7008dc120dc41f44670f2dc69047bb7332eac84702ef5d6c9e12"}
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.096425 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ncxt7" podStartSLOduration=77.096408576 podStartE2EDuration="1m17.096408576s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:55.094419789 +0000 UTC m=+101.855946689" watchObservedRunningTime="2026-01-22 15:25:55.096408576 +0000 UTC m=+101.857935486"
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.101048 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gzl8z" event={"ID":"af488f77-ce41-4294-a385-1b08650660d0","Type":"ContainerStarted","Data":"2f8aeec2d818e9ad2ad69b2842f671d512cac3a7440aab0c987f55e5cf16bc82"}
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.107407 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw" event={"ID":"a05d11ae-f760-4d26-8321-3ff0a38b4177","Type":"ContainerStarted","Data":"64449fc2cc86ef1a0baf596f175af43a07019db614ccdd24004981efea0bdf1a"}
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.133177 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:55 crc kubenswrapper[4825]: E0122 15:25:55.134095 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:55.634083822 +0000 UTC m=+102.395610732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.137886 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz" event={"ID":"caae48a6-c8ee-4c56-91cc-fe8f4b21e313","Type":"ContainerStarted","Data":"2ef9724cda4e706129e33b400e248ebe7544f13a0d1082810b7e9b7e1a537434"}
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.145873 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7t586" event={"ID":"2ce8aeac-1477-47a7-88ce-d0a46c66c5d6","Type":"ContainerStarted","Data":"4192b6c98dd2c47901f6deb53bf1161607616b338eb061145ffbce60e44d4bfc"}
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.148188 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2wkw" event={"ID":"579c0fe7-fbc1-4262-89d1-a8abfb6fc655","Type":"ContainerStarted","Data":"97d69bec738181c6c51a5d9cca69f03a3decbcab2d5dd5cb2da85ad8363c53d3"}
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.187167 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l" podStartSLOduration=77.187146979 podStartE2EDuration="1m17.187146979s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:55.183858045 +0000 UTC m=+101.945384965"
watchObservedRunningTime="2026-01-22 15:25:55.187146979 +0000 UTC m=+101.948673889"
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.208706 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-wbdh2" podStartSLOduration=77.208690175 podStartE2EDuration="1m17.208690175s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:55.207350137 +0000 UTC m=+101.968877047" watchObservedRunningTime="2026-01-22 15:25:55.208690175 +0000 UTC m=+101.970217085"
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.230885 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8g6lv" event={"ID":"7df2707d-94e8-4d29-84e8-14a50058f164","Type":"ContainerStarted","Data":"c868ea86d7c0b85e7ab3b79f870a122337cd434d70e77421d7bff4571214c823"}
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.230924 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8g6lv" event={"ID":"7df2707d-94e8-4d29-84e8-14a50058f164","Type":"ContainerStarted","Data":"893bc692c5accdf90ec48ab27fcd4190154ac83e867794b065964bcb52d54278"}
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.231848 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body=
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.231916 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused"
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.234717 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:55 crc kubenswrapper[4825]: E0122 15:25:55.236220 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:55.736193271 +0000 UTC m=+102.497720201 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.238775 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gzl8z" podStartSLOduration=77.238760474 podStartE2EDuration="1m17.238760474s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:55.236273583 +0000 UTC m=+101.997800493" watchObservedRunningTime="2026-01-22 15:25:55.238760474 +0000 UTC m=+102.000287384"
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.242314 4825 patch_prober.go:28] interesting pod/router-default-5444994796-v5ljt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 15:25:55 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld
Jan 22 15:25:55 crc kubenswrapper[4825]: [+]process-running ok
Jan 22 15:25:55 crc kubenswrapper[4825]: healthz check failed
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.242363 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v5ljt" podUID="bda22efd-beea-406e-a9d8-cb04fac11b9c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.343920 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:55 crc kubenswrapper[4825]: E0122 15:25:55.373269 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:55.863937113 +0000 UTC m=+102.625464023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.380533 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-t8cfv" podStartSLOduration=78.380512406 podStartE2EDuration="1m18.380512406s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:55.380035483 +0000 UTC m=+102.141562403" watchObservedRunningTime="2026-01-22 15:25:55.380512406 +0000 UTC m=+102.142039316"
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.507458 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:55 crc kubenswrapper[4825]: E0122 15:25:55.515433 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:56.015409993 +0000 UTC m=+102.776936903 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.515524 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:55 crc kubenswrapper[4825]: E0122 15:25:55.515920 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:56.015911257 +0000 UTC m=+102.777438167 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.598391 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-7hzt4" podStartSLOduration=9.598375344 podStartE2EDuration="9.598375344s" podCreationTimestamp="2026-01-22 15:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:55.597807918 +0000 UTC m=+102.359334838" watchObservedRunningTime="2026-01-22 15:25:55.598375344 +0000 UTC m=+102.359902244" Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.622368 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:55 crc kubenswrapper[4825]: E0122 15:25:55.622803 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:56.122784172 +0000 UTC m=+102.884311082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.813868 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:55 crc kubenswrapper[4825]: E0122 15:25:55.814356 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:56.314339838 +0000 UTC m=+103.075866748 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.914526 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.914786 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs\") pod \"network-metrics-daemon-hrdl8\" (UID: \"538e3056-0e80-4b71-ada6-b7440b283761\") " pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:55 crc kubenswrapper[4825]: E0122 15:25:55.917063 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:56.417043953 +0000 UTC m=+103.178570863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:55 crc kubenswrapper[4825]: I0122 15:25:55.920040 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538e3056-0e80-4b71-ada6-b7440b283761-metrics-certs\") pod \"network-metrics-daemon-hrdl8\" (UID: \"538e3056-0e80-4b71-ada6-b7440b283761\") " pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.016655 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:56 crc kubenswrapper[4825]: E0122 15:25:56.017027 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:56.51697096 +0000 UTC m=+103.278497870 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.144501 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrdl8" Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.144868 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:56 crc kubenswrapper[4825]: E0122 15:25:56.145162 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:56.645145384 +0000 UTC m=+103.406672294 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.255068 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:56 crc kubenswrapper[4825]: E0122 15:25:56.255503 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:56.755486658 +0000 UTC m=+103.517013568 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.355692 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:56 crc kubenswrapper[4825]: E0122 15:25:56.356026 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:56.856012162 +0000 UTC m=+103.617539072 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.360258 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4g57x" event={"ID":"9143af67-fb1d-4ef5-a862-bbd9a1afd2d8","Type":"ContainerStarted","Data":"cdccc9a13437167572cff43b84869977f37222837aa83810364c3372182e1e7a"} Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.362142 4825 generic.go:334] "Generic (PLEG): container finished" podID="2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1" containerID="2a7f819eb1f7f08e3baaff018a327951f5952ad97e209f4a1119d374b97c14e0" exitCode=0 Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.362324 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx" event={"ID":"2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1","Type":"ContainerDied","Data":"2a7f819eb1f7f08e3baaff018a327951f5952ad97e209f4a1119d374b97c14e0"} Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.363158 4825 patch_prober.go:28] interesting pod/router-default-5444994796-v5ljt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 15:25:56 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Jan 22 15:25:56 crc kubenswrapper[4825]: [+]process-running ok Jan 22 15:25:56 crc kubenswrapper[4825]: healthz check failed Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.363188 4825 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v5ljt" podUID="bda22efd-beea-406e-a9d8-cb04fac11b9c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.366006 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x" event={"ID":"98362ccb-0056-41c9-b958-ca0a11e30c45","Type":"ContainerStarted","Data":"c38496316a3545b5ecf85fe84e5d22d93ea5339141cd72269919a89e4626e1bf"} Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.366557 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x" Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.367689 4825 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7v44x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.367729 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x" podUID="98362ccb-0056-41c9-b958-ca0a11e30c45" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.457330 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4g57x" podStartSLOduration=78.457316117 podStartE2EDuration="1m18.457316117s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-22 15:25:56.457189914 +0000 UTC m=+103.218716824" watchObservedRunningTime="2026-01-22 15:25:56.457316117 +0000 UTC m=+103.218843027" Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.459570 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:56 crc kubenswrapper[4825]: E0122 15:25:56.460507 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:56.960496538 +0000 UTC m=+103.722023448 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.504575 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw" event={"ID":"a05d11ae-f760-4d26-8321-3ff0a38b4177","Type":"ContainerStarted","Data":"f5d18b4cdeb4842b6abe4da79daa572fb35b3dc52fd8fcd8a01ddd6587be6a34"} Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.528360 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5" 
event={"ID":"b6c94df9-bcdf-40c8-9217-781d33efd3db","Type":"ContainerStarted","Data":"1bed5c38555e6e86340e1f3c492e136ee8be2518afff535821da931b9d7fa16d"} Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.560098 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:56 crc kubenswrapper[4825]: E0122 15:25:56.560327 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:57.060312961 +0000 UTC m=+103.821839871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.561241 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4xhdn" event={"ID":"c3290d6b-2824-4136-ac65-df0fa10995d9","Type":"ContainerStarted","Data":"48e29a2d90c2aaf8f617ef4ffe4b6e259c974f1e25ac55ab7d0e21a91e5795bf"} Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.595535 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x" 
podStartSLOduration=78.595505068 podStartE2EDuration="1m18.595505068s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:56.590570327 +0000 UTC m=+103.352097237" watchObservedRunningTime="2026-01-22 15:25:56.595505068 +0000 UTC m=+103.357031988" Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.619581 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xhfx7" event={"ID":"095c6359-a33b-4176-becb-f60758bb28b4","Type":"ContainerStarted","Data":"8959736ab5235769636d70f0c22861b9a8a17bc39d841bdefff03843ff15a294"} Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.646493 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qvrsf" event={"ID":"e3aa1598-50ce-4efb-98b6-ae06c5ce75af","Type":"ContainerStarted","Data":"d0d0deec90912bc6d3fa69f3620451e8fe0c0024a29c0f52113cd1dc6ccdff15"} Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.666834 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:56 crc kubenswrapper[4825]: E0122 15:25:56.667113 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:57.167101284 +0000 UTC m=+103.928628194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.679840 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-89hpb" event={"ID":"2fb87ec5-b498-4153-b880-c42bfdd2089c","Type":"ContainerStarted","Data":"8afd0aff2f200ffd2cf88e3aba9e5a44ed2970cc398620e87a6ccb113c28b1cd"} Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.679885 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-89hpb" event={"ID":"2fb87ec5-b498-4153-b880-c42bfdd2089c","Type":"ContainerStarted","Data":"fdd9d0e16916a213fdf8e837c982c115d0c4e6e679b6ece544df6ee2e3b40417"} Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.680524 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-89hpb" Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.682103 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-96g5g" event={"ID":"40f68db0-1962-4d15-a903-7eb1fb30d414","Type":"ContainerStarted","Data":"27f17004ca67f83d4a5dead55cb67438c6a2259f1d3ba1495e5f2293f0ba6432"} Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.682139 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-96g5g" 
event={"ID":"40f68db0-1962-4d15-a903-7eb1fb30d414","Type":"ContainerStarted","Data":"5c63257ba57a73679421429f3ab910d16d892cf108fde807d6ed8a55109758d4"} Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.685039 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gzl8z" event={"ID":"af488f77-ce41-4294-a385-1b08650660d0","Type":"ContainerStarted","Data":"6e10baa69d4bb84d037208613a3d8ef22e582dc8c1300b15907ec394415b77ca"} Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.738635 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz" event={"ID":"caae48a6-c8ee-4c56-91cc-fe8f4b21e313","Type":"ContainerStarted","Data":"0b3cf3720325cc7d91844eb51ef35e896132c208ab98b5ec8eb68cf404526e03"} Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.740701 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7t586" event={"ID":"2ce8aeac-1477-47a7-88ce-d0a46c66c5d6","Type":"ContainerStarted","Data":"b01d76285957a5a755b6299024279fde92dad1363ac9df6fb4d76a78cae16154"} Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.741194 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-7t586" Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.742228 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-24pkt" event={"ID":"288b665c-d9e3-4f3c-93f5-4632d98b9028","Type":"ContainerStarted","Data":"e0818fc8def33a3c3eebda75d7910a1acf1611b7f70b9d8df33ae6c14b9d8fc3"} Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.758087 4825 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-64d9l container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" 
start-of-body= Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.758111 4825 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kv4vj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.758142 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l" podUID="1132329f-90a4-4bcf-a303-28ec140c7c3f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.758178 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" podUID="4d7e321a-a057-40e4-9826-4d9b8b46b30a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.769099 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:56 crc kubenswrapper[4825]: E0122 15:25:56.770567 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:57.270552121 +0000 UTC m=+104.032079031 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.785865 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4xhdn" podStartSLOduration=78.785850209 podStartE2EDuration="1m18.785850209s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:56.627335767 +0000 UTC m=+103.388862677" watchObservedRunningTime="2026-01-22 15:25:56.785850209 +0000 UTC m=+103.547377119"
Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.790539 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xhfx7" podStartSLOduration=78.790521762 podStartE2EDuration="1m18.790521762s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:56.779316192 +0000 UTC m=+103.540843092" watchObservedRunningTime="2026-01-22 15:25:56.790521762 +0000 UTC m=+103.552048672"
Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.872925 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-7t586" podStartSLOduration=10.872911937 podStartE2EDuration="10.872911937s" podCreationTimestamp="2026-01-22 15:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:56.837488265 +0000 UTC m=+103.599015175" watchObservedRunningTime="2026-01-22 15:25:56.872911937 +0000 UTC m=+103.634438847"
Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.874017 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-96g5g" podStartSLOduration=78.874011819 podStartE2EDuration="1m18.874011819s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:56.871900318 +0000 UTC m=+103.633427228" watchObservedRunningTime="2026-01-22 15:25:56.874011819 +0000 UTC m=+103.635538729"
Jan 22 15:25:56 crc kubenswrapper[4825]: I0122 15:25:56.878560 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:56 crc kubenswrapper[4825]: E0122 15:25:56.878825 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:57.378814546 +0000 UTC m=+104.140341456 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.048374 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.050239 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qvrsf" podStartSLOduration=79.050229366 podStartE2EDuration="1m19.050229366s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:56.95206126 +0000 UTC m=+103.713588170" watchObservedRunningTime="2026-01-22 15:25:57.050229366 +0000 UTC m=+103.811756276"
Jan 22 15:25:57 crc kubenswrapper[4825]: E0122 15:25:57.051088 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:57.55107588 +0000 UTC m=+104.312602790 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.150959 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:57 crc kubenswrapper[4825]: E0122 15:25:57.151565 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:57.651551752 +0000 UTC m=+104.413078662 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.284852 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-phwjz"
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.286145 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-phwjz"
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.286464 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:57 crc kubenswrapper[4825]: E0122 15:25:57.286767 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:57.786748207 +0000 UTC m=+104.548275117 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.304636 4825 patch_prober.go:28] interesting pod/router-default-5444994796-v5ljt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 15:25:57 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld
Jan 22 15:25:57 crc kubenswrapper[4825]: [+]process-running ok
Jan 22 15:25:57 crc kubenswrapper[4825]: healthz check failed
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.304713 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v5ljt" podUID="bda22efd-beea-406e-a9d8-cb04fac11b9c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.315958 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz" podStartSLOduration=80.315934791 podStartE2EDuration="1m20.315934791s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:57.135110602 +0000 UTC m=+103.896637512" watchObservedRunningTime="2026-01-22 15:25:57.315934791 +0000 UTC m=+104.077461701"
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.318184 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-89hpb" podStartSLOduration=79.318173655 podStartE2EDuration="1m19.318173655s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:57.304386271 +0000 UTC m=+104.065913191" watchObservedRunningTime="2026-01-22 15:25:57.318173655 +0000 UTC m=+104.079700565"
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.397847 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:57 crc kubenswrapper[4825]: E0122 15:25:57.400275 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:57.900260162 +0000 UTC m=+104.661787072 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.502817 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:57 crc kubenswrapper[4825]: E0122 15:25:57.503256 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:58.003225335 +0000 UTC m=+104.764752245 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.605643 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:57 crc kubenswrapper[4825]: E0122 15:25:57.606139 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:58.106127527 +0000 UTC m=+104.867654437 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.635692 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hrdl8"]
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.706607 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:57 crc kubenswrapper[4825]: E0122 15:25:57.707318 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:58.207281988 +0000 UTC m=+104.968808898 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.833296 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:57 crc kubenswrapper[4825]: E0122 15:25:57.833573 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:58.333562088 +0000 UTC m=+105.095088998 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.841720 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn"
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.841754 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn"
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.916684 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hrdl8" event={"ID":"538e3056-0e80-4b71-ada6-b7440b283761","Type":"ContainerStarted","Data":"d44d6acef9ea59d1379c7d8cee00063f5f9153738d7eb4170edb88318b5eab45"}
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.925885 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-24pkt" event={"ID":"288b665c-d9e3-4f3c-93f5-4632d98b9028","Type":"ContainerStarted","Data":"68b89feeb90121c006d1f258db05dfb4fe304ecf2846b0470de1282c82f0a1ef"}
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.948449 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:57 crc kubenswrapper[4825]: E0122 15:25:57.948573 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:58.448554395 +0000 UTC m=+105.210081305 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.948606 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8g6lv" event={"ID":"7df2707d-94e8-4d29-84e8-14a50058f164","Type":"ContainerStarted","Data":"2b504c1e120c540b399133ccc72535797404de87407d64ccf06e27bbb1375d8c"}
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.948643 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:57 crc kubenswrapper[4825]: E0122 15:25:57.949005 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:58.448991407 +0000 UTC m=+105.210518307 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:57 crc kubenswrapper[4825]: I0122 15:25:57.953824 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw" event={"ID":"a05d11ae-f760-4d26-8321-3ff0a38b4177","Type":"ContainerStarted","Data":"28d866e184b9a20b2592856d1efbaf930852cf8cef891bdd4825a3a5c3d2a32d"}
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.044112 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body=
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.044167 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused"
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.044225 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body=
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.044276 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused"
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.044481 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9b4mz" event={"ID":"1cbd0ab2-dc30-46da-8442-206675e887a4","Type":"ContainerStarted","Data":"676e42eff1513bb1fda44706a1869748eca28e85e473a72f0a7ddf98b2df2096"}
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.048848 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-8g6lv" podStartSLOduration=80.048832831 podStartE2EDuration="1m20.048832831s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:58.04526703 +0000 UTC m=+104.806793940" watchObservedRunningTime="2026-01-22 15:25:58.048832831 +0000 UTC m=+104.810359741"
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.051726 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:58 crc kubenswrapper[4825]: E0122 15:25:58.052191 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:58.552159197 +0000 UTC m=+105.313686107 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.057773 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:58 crc kubenswrapper[4825]: E0122 15:25:58.058134 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:58.558124657 +0000 UTC m=+105.319651567 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.075171 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2wkw" event={"ID":"579c0fe7-fbc1-4262-89d1-a8abfb6fc655","Type":"ContainerStarted","Data":"e181753df5005dd5fd551820dbe8755d74ba5728effffdd919258d897001fd4e"}
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.098593 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ckqzw" podStartSLOduration=80.098577093 podStartE2EDuration="1m20.098577093s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:58.097441951 +0000 UTC m=+104.858968871" watchObservedRunningTime="2026-01-22 15:25:58.098577093 +0000 UTC m=+104.860104003"
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.139823 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2wkw" podStartSLOduration=80.139806192 podStartE2EDuration="1m20.139806192s" podCreationTimestamp="2026-01-22 15:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:58.138777353 +0000 UTC m=+104.900304283" watchObservedRunningTime="2026-01-22 15:25:58.139806192 +0000 UTC m=+104.901333102"
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.160811 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:58 crc kubenswrapper[4825]: E0122 15:25:58.162683 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:58.662665175 +0000 UTC m=+105.424192085 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.182356 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5" event={"ID":"b6c94df9-bcdf-40c8-9217-781d33efd3db","Type":"ContainerStarted","Data":"0f496f156c11f036150cd292966f29a6f5ec62cdc2bf59b2a1e3b374abcf7ea5"}
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.206873 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx" event={"ID":"2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1","Type":"ContainerStarted","Data":"f09aafd78b856a3a5855fee636127dcdb4f524108eee70eadf758429317b2112"}
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.208663 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx"
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.209589 4825 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7v44x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.209709 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x" podUID="98362ccb-0056-41c9-b958-ca0a11e30c45" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused"
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.238273 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8lt5" podStartSLOduration=81.238254536 podStartE2EDuration="1m21.238254536s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:58.235999352 +0000 UTC m=+104.997526262" watchObservedRunningTime="2026-01-22 15:25:58.238254536 +0000 UTC m=+104.999781446"
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.240406 4825 patch_prober.go:28] interesting pod/router-default-5444994796-v5ljt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 15:25:58 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld
Jan 22 15:25:58 crc kubenswrapper[4825]: [+]process-running ok
Jan 22 15:25:58 crc kubenswrapper[4825]: healthz check failed
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.240691 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v5ljt" podUID="bda22efd-beea-406e-a9d8-cb04fac11b9c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.284087 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:58 crc kubenswrapper[4825]: E0122 15:25:58.286837 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:58.786821754 +0000 UTC m=+105.548348664 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.324103 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx" podStartSLOduration=81.32408424 podStartE2EDuration="1m21.32408424s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:58.322057292 +0000 UTC m=+105.083584212" watchObservedRunningTime="2026-01-22 15:25:58.32408424 +0000 UTC m=+105.085611150"
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.400993 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:58 crc kubenswrapper[4825]: E0122 15:25:58.401358 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:58.901339788 +0000 UTC m=+105.662866698 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.401915 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:58 crc kubenswrapper[4825]: E0122 15:25:58.402334 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:58.902323156 +0000 UTC m=+105.663850066 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:58 crc kubenswrapper[4825]: E0122 15:25:58.503821 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:59.003794736 +0000 UTC m=+105.765321646 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.503729 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.504330 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f"
Jan 22 15:25:58 crc kubenswrapper[4825]: E0122 15:25:58.505045 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:59.005033141 +0000 UTC m=+105.766560051 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.569661 4825 csr.go:261] certificate signing request csr-b7gb8 is approved, waiting to be issued
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.636772 4825 csr.go:257] certificate signing request csr-b7gb8 is issued
Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.638404 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 15:25:58 crc kubenswrapper[4825]: E0122 15:25:58.638860 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed.
No retries permitted until 2026-01-22 15:25:59.138830715 +0000 UTC m=+105.900357625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.778574 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:58 crc kubenswrapper[4825]: E0122 15:25:58.778961 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:59.278940061 +0000 UTC m=+106.040466971 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.919543 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:58 crc kubenswrapper[4825]: E0122 15:25:58.919752 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:59.419730045 +0000 UTC m=+106.181256955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:58 crc kubenswrapper[4825]: I0122 15:25:58.920334 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:58 crc kubenswrapper[4825]: E0122 15:25:58.920734 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:59.420718054 +0000 UTC m=+106.182244964 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.142771 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:59 crc kubenswrapper[4825]: E0122 15:25:59.143317 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:59.643284676 +0000 UTC m=+106.404811586 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.143454 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:59 crc kubenswrapper[4825]: E0122 15:25:59.144000 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:59.643957995 +0000 UTC m=+106.405484905 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.208161 4825 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-64d9l container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.208222 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l" podUID="1132329f-90a4-4bcf-a303-28ec140c7c3f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.208665 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.208904 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.258150 4825 patch_prober.go:28] interesting pod/router-default-5444994796-v5ljt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 
15:25:59 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Jan 22 15:25:59 crc kubenswrapper[4825]: [+]process-running ok Jan 22 15:25:59 crc kubenswrapper[4825]: healthz check failed Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.258201 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v5ljt" podUID="bda22efd-beea-406e-a9d8-cb04fac11b9c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.322740 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:59 crc kubenswrapper[4825]: E0122 15:25:59.323771 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:25:59.823750984 +0000 UTC m=+106.585277894 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.331139 4825 patch_prober.go:28] interesting pod/console-f9d7485db-qvds8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.331207 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qvds8" podUID="81d43c37-4152-47d0-be95-a390693902e9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.372275 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.380611 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-24pkt" event={"ID":"288b665c-d9e3-4f3c-93f5-4632d98b9028","Type":"ContainerStarted","Data":"a88f8b70f08892ac1da32dfe15e4a0b0c36ff51b7a48ecee414c42bf6595c79b"} Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.389992 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hrdl8" event={"ID":"538e3056-0e80-4b71-ada6-b7440b283761","Type":"ContainerStarted","Data":"f6589b28474101cd7a3cffecff89952d9f017381fa60c70f4255262ea048a8f4"} Jan 22 
15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.425755 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:59 crc kubenswrapper[4825]: E0122 15:25:59.426122 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:25:59.9261042 +0000 UTC m=+106.687631110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.606949 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7v44x" Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.607768 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:59 crc kubenswrapper[4825]: E0122 15:25:59.608651 4825 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:00.108631318 +0000 UTC m=+106.870158228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.639070 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-22 15:20:58 +0000 UTC, rotation deadline is 2026-10-20 09:55:57.092764514 +0000 UTC Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.639108 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6498h29m57.453658625s for next certificate rotation Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.682117 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-24pkt" podStartSLOduration=82.682100678 podStartE2EDuration="1m22.682100678s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:25:59.680012658 +0000 UTC m=+106.441539568" watchObservedRunningTime="2026-01-22 15:25:59.682100678 +0000 UTC m=+106.443627588" Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.711351 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:59 crc kubenswrapper[4825]: E0122 15:25:59.712661 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:00.212649571 +0000 UTC m=+106.974176481 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.746096 4825 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-sbkgn container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 22 15:25:59 crc kubenswrapper[4825]: [+]log ok Jan 22 15:25:59 crc kubenswrapper[4825]: [+]etcd ok Jan 22 15:25:59 crc kubenswrapper[4825]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 22 15:25:59 crc kubenswrapper[4825]: [-]poststarthook/generic-apiserver-start-informers failed: reason withheld Jan 22 15:25:59 crc kubenswrapper[4825]: [+]poststarthook/max-in-flight-filter ok Jan 22 15:25:59 crc kubenswrapper[4825]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 22 15:25:59 crc kubenswrapper[4825]: [+]poststarthook/openshift.io-StartUserInformer ok Jan 22 15:25:59 crc kubenswrapper[4825]: 
[+]poststarthook/openshift.io-StartOAuthInformer ok Jan 22 15:25:59 crc kubenswrapper[4825]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Jan 22 15:25:59 crc kubenswrapper[4825]: livez check failed Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.746151 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" podUID="ba2eb0b7-43ae-49a7-9a19-c969039de168" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.812162 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:59 crc kubenswrapper[4825]: E0122 15:25:59.812268 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:00.312252548 +0000 UTC m=+107.073779458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.812468 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:25:59 crc kubenswrapper[4825]: E0122 15:25:59.812711 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:00.312704551 +0000 UTC m=+107.074231461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:25:59 crc kubenswrapper[4825]: I0122 15:25:59.969862 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:25:59 crc kubenswrapper[4825]: E0122 15:25:59.970459 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:00.4704427 +0000 UTC m=+107.231969610 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:00 crc kubenswrapper[4825]: I0122 15:26:00.071526 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:00 crc kubenswrapper[4825]: E0122 15:26:00.071861 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:00.571846779 +0000 UTC m=+107.333373689 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:00 crc kubenswrapper[4825]: I0122 15:26:00.172970 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:00 crc kubenswrapper[4825]: E0122 15:26:00.173724 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:00.673702511 +0000 UTC m=+107.435229421 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:00 crc kubenswrapper[4825]: I0122 15:26:00.262160 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-v5ljt" Jan 22 15:26:00 crc kubenswrapper[4825]: I0122 15:26:00.273066 4825 patch_prober.go:28] interesting pod/router-default-5444994796-v5ljt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 15:26:00 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Jan 22 15:26:00 crc kubenswrapper[4825]: [+]process-running ok Jan 22 15:26:00 crc kubenswrapper[4825]: healthz check failed Jan 22 15:26:00 crc kubenswrapper[4825]: I0122 15:26:00.273117 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v5ljt" podUID="bda22efd-beea-406e-a9d8-cb04fac11b9c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 15:26:00 crc kubenswrapper[4825]: I0122 15:26:00.276900 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:00 crc kubenswrapper[4825]: E0122 15:26:00.277197 4825 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:00.777186379 +0000 UTC m=+107.538713289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:00 crc kubenswrapper[4825]: I0122 15:26:00.377717 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:00 crc kubenswrapper[4825]: E0122 15:26:00.378468 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:00.878450703 +0000 UTC m=+107.639977613 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:00 crc kubenswrapper[4825]: I0122 15:26:00.479115 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:00 crc kubenswrapper[4825]: E0122 15:26:00.479456 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:00.97944204 +0000 UTC m=+107.740968950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:00 crc kubenswrapper[4825]: I0122 15:26:00.504092 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hrdl8" event={"ID":"538e3056-0e80-4b71-ada6-b7440b283761","Type":"ContainerStarted","Data":"fe842d780bd373b553cee88d7837f752691bcebfc6eeecf6f332f49a631599d7"} Jan 22 15:26:00 crc kubenswrapper[4825]: I0122 15:26:00.511882 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9b4mz" event={"ID":"1cbd0ab2-dc30-46da-8442-206675e887a4","Type":"ContainerStarted","Data":"117384ce1ee8679dabd3e56bac0b94dfcdd345ce4911da15ee8caa0ad74797f1"} Jan 22 15:26:00 crc kubenswrapper[4825]: I0122 15:26:00.534676 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-hrdl8" podStartSLOduration=83.534655438 podStartE2EDuration="1m23.534655438s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:26:00.530128049 +0000 UTC m=+107.291654959" watchObservedRunningTime="2026-01-22 15:26:00.534655438 +0000 UTC m=+107.296182348" Jan 22 15:26:00 crc kubenswrapper[4825]: I0122 15:26:00.693928 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:00 crc kubenswrapper[4825]: E0122 15:26:00.694333 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:01.194314032 +0000 UTC m=+107.955840942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:00 crc kubenswrapper[4825]: I0122 15:26:00.711555 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l" Jan 22 15:26:00 crc kubenswrapper[4825]: I0122 15:26:00.796370 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:00 crc kubenswrapper[4825]: E0122 15:26:00.797925 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:01.297913304 +0000 UTC m=+108.059440214 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:00 crc kubenswrapper[4825]: I0122 15:26:00.915107 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:00 crc kubenswrapper[4825]: E0122 15:26:00.915237 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:01.415210567 +0000 UTC m=+108.176737467 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:00 crc kubenswrapper[4825]: I0122 15:26:00.915343 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:00 crc kubenswrapper[4825]: E0122 15:26:00.915692 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:01.41567391 +0000 UTC m=+108.177200830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:00 crc kubenswrapper[4825]: I0122 15:26:00.923565 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sw69k"] Jan 22 15:26:00 crc kubenswrapper[4825]: I0122 15:26:00.924620 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sw69k" Jan 22 15:26:00 crc kubenswrapper[4825]: I0122 15:26:00.928149 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.044661 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:01 crc kubenswrapper[4825]: E0122 15:26:01.046008 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:01.545984805 +0000 UTC m=+108.307511715 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.076523 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sw69k"] Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.085951 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ktlrw"] Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.087121 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ktlrw" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.088829 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ktlrw"] Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.098823 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.172722 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.172772 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104-catalog-content\") pod \"community-operators-sw69k\" (UID: \"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104\") " pod="openshift-marketplace/community-operators-sw69k" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.172815 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcqd4\" (UniqueName: \"kubernetes.io/projected/aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104-kube-api-access-wcqd4\") pod \"community-operators-sw69k\" (UID: \"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104\") " pod="openshift-marketplace/community-operators-sw69k" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.172841 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104-utilities\") pod \"community-operators-sw69k\" (UID: \"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104\") " pod="openshift-marketplace/community-operators-sw69k" Jan 22 15:26:01 crc kubenswrapper[4825]: E0122 15:26:01.173211 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:01.673193251 +0000 UTC m=+108.434720161 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.174468 4825 patch_prober.go:28] interesting pod/apiserver-76f77b778f-phwjz container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 22 15:26:01 crc kubenswrapper[4825]: [+]log ok Jan 22 15:26:01 crc kubenswrapper[4825]: [+]etcd ok Jan 22 15:26:01 crc kubenswrapper[4825]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 22 15:26:01 crc kubenswrapper[4825]: [+]poststarthook/generic-apiserver-start-informers ok Jan 22 15:26:01 crc kubenswrapper[4825]: [+]poststarthook/max-in-flight-filter ok Jan 22 15:26:01 crc kubenswrapper[4825]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 22 15:26:01 crc kubenswrapper[4825]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 22 15:26:01 crc kubenswrapper[4825]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 22 15:26:01 crc kubenswrapper[4825]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 22 15:26:01 crc kubenswrapper[4825]: [+]poststarthook/project.openshift.io-projectcache ok Jan 22 15:26:01 crc kubenswrapper[4825]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 22 15:26:01 crc kubenswrapper[4825]: [+]poststarthook/openshift.io-startinformers ok Jan 22 15:26:01 crc kubenswrapper[4825]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 22 15:26:01 crc 
kubenswrapper[4825]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 22 15:26:01 crc kubenswrapper[4825]: livez check failed Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.174513 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-phwjz" podUID="aeb10bb8-1d41-433a-8f08-2edf3eefaa7c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.193580 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wnr8n"] Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.194616 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnr8n" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.238792 4825 patch_prober.go:28] interesting pod/router-default-5444994796-v5ljt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 15:26:01 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Jan 22 15:26:01 crc kubenswrapper[4825]: [+]process-running ok Jan 22 15:26:01 crc kubenswrapper[4825]: healthz check failed Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.238876 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v5ljt" podUID="bda22efd-beea-406e-a9d8-cb04fac11b9c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.274499 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.274800 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb9l7\" (UniqueName: \"kubernetes.io/projected/43675eea-b514-472a-9f19-d93ec4ddf044-kube-api-access-xb9l7\") pod \"certified-operators-ktlrw\" (UID: \"43675eea-b514-472a-9f19-d93ec4ddf044\") " pod="openshift-marketplace/certified-operators-ktlrw" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.274871 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104-catalog-content\") pod \"community-operators-sw69k\" (UID: \"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104\") " pod="openshift-marketplace/community-operators-sw69k" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.274904 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43675eea-b514-472a-9f19-d93ec4ddf044-catalog-content\") pod \"certified-operators-ktlrw\" (UID: \"43675eea-b514-472a-9f19-d93ec4ddf044\") " pod="openshift-marketplace/certified-operators-ktlrw" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.274938 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43675eea-b514-472a-9f19-d93ec4ddf044-utilities\") pod \"certified-operators-ktlrw\" (UID: \"43675eea-b514-472a-9f19-d93ec4ddf044\") " pod="openshift-marketplace/certified-operators-ktlrw" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.274999 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcqd4\" (UniqueName: 
\"kubernetes.io/projected/aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104-kube-api-access-wcqd4\") pod \"community-operators-sw69k\" (UID: \"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104\") " pod="openshift-marketplace/community-operators-sw69k" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.275029 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104-utilities\") pod \"community-operators-sw69k\" (UID: \"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104\") " pod="openshift-marketplace/community-operators-sw69k" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.275492 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104-utilities\") pod \"community-operators-sw69k\" (UID: \"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104\") " pod="openshift-marketplace/community-operators-sw69k" Jan 22 15:26:01 crc kubenswrapper[4825]: E0122 15:26:01.275565 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:01.775550027 +0000 UTC m=+108.537076937 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.275844 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104-catalog-content\") pod \"community-operators-sw69k\" (UID: \"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104\") " pod="openshift-marketplace/community-operators-sw69k" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.310385 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcqd4\" (UniqueName: \"kubernetes.io/projected/aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104-kube-api-access-wcqd4\") pod \"community-operators-sw69k\" (UID: \"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104\") " pod="openshift-marketplace/community-operators-sw69k" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.374344 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sw69k" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.383459 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b87096-547e-442b-9701-4e6a222ce547-catalog-content\") pod \"community-operators-wnr8n\" (UID: \"c7b87096-547e-442b-9701-4e6a222ce547\") " pod="openshift-marketplace/community-operators-wnr8n" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.383533 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43675eea-b514-472a-9f19-d93ec4ddf044-catalog-content\") pod \"certified-operators-ktlrw\" (UID: \"43675eea-b514-472a-9f19-d93ec4ddf044\") " pod="openshift-marketplace/certified-operators-ktlrw" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.383554 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.383573 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43675eea-b514-472a-9f19-d93ec4ddf044-utilities\") pod \"certified-operators-ktlrw\" (UID: \"43675eea-b514-472a-9f19-d93ec4ddf044\") " pod="openshift-marketplace/certified-operators-ktlrw" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.383630 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szwxg\" (UniqueName: 
\"kubernetes.io/projected/c7b87096-547e-442b-9701-4e6a222ce547-kube-api-access-szwxg\") pod \"community-operators-wnr8n\" (UID: \"c7b87096-547e-442b-9701-4e6a222ce547\") " pod="openshift-marketplace/community-operators-wnr8n" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.383681 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb9l7\" (UniqueName: \"kubernetes.io/projected/43675eea-b514-472a-9f19-d93ec4ddf044-kube-api-access-xb9l7\") pod \"certified-operators-ktlrw\" (UID: \"43675eea-b514-472a-9f19-d93ec4ddf044\") " pod="openshift-marketplace/certified-operators-ktlrw" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.383698 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b87096-547e-442b-9701-4e6a222ce547-utilities\") pod \"community-operators-wnr8n\" (UID: \"c7b87096-547e-442b-9701-4e6a222ce547\") " pod="openshift-marketplace/community-operators-wnr8n" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.384159 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43675eea-b514-472a-9f19-d93ec4ddf044-catalog-content\") pod \"certified-operators-ktlrw\" (UID: \"43675eea-b514-472a-9f19-d93ec4ddf044\") " pod="openshift-marketplace/certified-operators-ktlrw" Jan 22 15:26:01 crc kubenswrapper[4825]: E0122 15:26:01.384392 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:01.884380698 +0000 UTC m=+108.645907608 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.384784 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43675eea-b514-472a-9f19-d93ec4ddf044-utilities\") pod \"certified-operators-ktlrw\" (UID: \"43675eea-b514-472a-9f19-d93ec4ddf044\") " pod="openshift-marketplace/certified-operators-ktlrw" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.413087 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb9l7\" (UniqueName: \"kubernetes.io/projected/43675eea-b514-472a-9f19-d93ec4ddf044-kube-api-access-xb9l7\") pod \"certified-operators-ktlrw\" (UID: \"43675eea-b514-472a-9f19-d93ec4ddf044\") " pod="openshift-marketplace/certified-operators-ktlrw" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.430053 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zzxc9"] Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.431331 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zzxc9" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.442619 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zzxc9"] Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.476091 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnr8n"] Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.487372 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.487801 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szwxg\" (UniqueName: \"kubernetes.io/projected/c7b87096-547e-442b-9701-4e6a222ce547-kube-api-access-szwxg\") pod \"community-operators-wnr8n\" (UID: \"c7b87096-547e-442b-9701-4e6a222ce547\") " pod="openshift-marketplace/community-operators-wnr8n" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.487866 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b87096-547e-442b-9701-4e6a222ce547-utilities\") pod \"community-operators-wnr8n\" (UID: \"c7b87096-547e-442b-9701-4e6a222ce547\") " pod="openshift-marketplace/community-operators-wnr8n" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.487898 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b87096-547e-442b-9701-4e6a222ce547-catalog-content\") pod \"community-operators-wnr8n\" (UID: \"c7b87096-547e-442b-9701-4e6a222ce547\") " 
pod="openshift-marketplace/community-operators-wnr8n" Jan 22 15:26:01 crc kubenswrapper[4825]: E0122 15:26:01.488393 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:01.988375031 +0000 UTC m=+108.749901941 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.488306 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b87096-547e-442b-9701-4e6a222ce547-catalog-content\") pod \"community-operators-wnr8n\" (UID: \"c7b87096-547e-442b-9701-4e6a222ce547\") " pod="openshift-marketplace/community-operators-wnr8n" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.488500 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b87096-547e-442b-9701-4e6a222ce547-utilities\") pod \"community-operators-wnr8n\" (UID: \"c7b87096-547e-442b-9701-4e6a222ce547\") " pod="openshift-marketplace/community-operators-wnr8n" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.510577 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szwxg\" (UniqueName: \"kubernetes.io/projected/c7b87096-547e-442b-9701-4e6a222ce547-kube-api-access-szwxg\") pod \"community-operators-wnr8n\" (UID: 
\"c7b87096-547e-442b-9701-4e6a222ce547\") " pod="openshift-marketplace/community-operators-wnr8n" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.582777 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.592230 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk5sb\" (UniqueName: \"kubernetes.io/projected/1ae2b938-4488-405a-bba8-4693edacafc8-kube-api-access-bk5sb\") pod \"certified-operators-zzxc9\" (UID: \"1ae2b938-4488-405a-bba8-4693edacafc8\") " pod="openshift-marketplace/certified-operators-zzxc9" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.592351 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ae2b938-4488-405a-bba8-4693edacafc8-catalog-content\") pod \"certified-operators-zzxc9\" (UID: \"1ae2b938-4488-405a-bba8-4693edacafc8\") " pod="openshift-marketplace/certified-operators-zzxc9" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.592559 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.592623 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ae2b938-4488-405a-bba8-4693edacafc8-utilities\") pod \"certified-operators-zzxc9\" (UID: \"1ae2b938-4488-405a-bba8-4693edacafc8\") " pod="openshift-marketplace/certified-operators-zzxc9" Jan 22 
15:26:01 crc kubenswrapper[4825]: E0122 15:26:01.593267 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:02.093253959 +0000 UTC m=+108.854780869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.696327 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.696493 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ae2b938-4488-405a-bba8-4693edacafc8-catalog-content\") pod \"certified-operators-zzxc9\" (UID: \"1ae2b938-4488-405a-bba8-4693edacafc8\") " pod="openshift-marketplace/certified-operators-zzxc9" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.696588 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ae2b938-4488-405a-bba8-4693edacafc8-utilities\") pod \"certified-operators-zzxc9\" (UID: \"1ae2b938-4488-405a-bba8-4693edacafc8\") " pod="openshift-marketplace/certified-operators-zzxc9" Jan 22 
15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.696612 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk5sb\" (UniqueName: \"kubernetes.io/projected/1ae2b938-4488-405a-bba8-4693edacafc8-kube-api-access-bk5sb\") pod \"certified-operators-zzxc9\" (UID: \"1ae2b938-4488-405a-bba8-4693edacafc8\") " pod="openshift-marketplace/certified-operators-zzxc9" Jan 22 15:26:01 crc kubenswrapper[4825]: E0122 15:26:01.697272 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:02.197258052 +0000 UTC m=+108.958784962 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.697653 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ae2b938-4488-405a-bba8-4693edacafc8-catalog-content\") pod \"certified-operators-zzxc9\" (UID: \"1ae2b938-4488-405a-bba8-4693edacafc8\") " pod="openshift-marketplace/certified-operators-zzxc9" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.697880 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ae2b938-4488-405a-bba8-4693edacafc8-utilities\") pod \"certified-operators-zzxc9\" (UID: \"1ae2b938-4488-405a-bba8-4693edacafc8\") " 
pod="openshift-marketplace/certified-operators-zzxc9" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.709686 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ktlrw" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.761924 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sw69k"] Jan 22 15:26:01 crc kubenswrapper[4825]: W0122 15:26:01.778154 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa62c6e7_0cb9_4c7e_8885_a2bf0d0e1104.slice/crio-e2defe39a630aa086b154d6bab4c422f8d0db5475bc4fad2f3d0a9b2b5912edd WatchSource:0}: Error finding container e2defe39a630aa086b154d6bab4c422f8d0db5475bc4fad2f3d0a9b2b5912edd: Status 404 returned error can't find the container with id e2defe39a630aa086b154d6bab4c422f8d0db5475bc4fad2f3d0a9b2b5912edd Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.797901 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:01 crc kubenswrapper[4825]: E0122 15:26:01.798587 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:02.298565948 +0000 UTC m=+109.060092928 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.806666 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnr8n" Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.941962 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:01 crc kubenswrapper[4825]: E0122 15:26:01.942291 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:02.442262446 +0000 UTC m=+109.203789366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:01 crc kubenswrapper[4825]: I0122 15:26:01.942437 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:01 crc kubenswrapper[4825]: E0122 15:26:01.942798 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:02.44277942 +0000 UTC m=+109.204306330 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.043201 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:02 crc kubenswrapper[4825]: E0122 15:26:02.043368 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:02.543342265 +0000 UTC m=+109.304869175 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.043564 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:02 crc kubenswrapper[4825]: E0122 15:26:02.043972 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:02.543955523 +0000 UTC m=+109.305482423 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.100479 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ktlrw"] Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.144536 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:02 crc kubenswrapper[4825]: E0122 15:26:02.144732 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:02.644702611 +0000 UTC m=+109.406229521 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.144940 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:02 crc kubenswrapper[4825]: E0122 15:26:02.145199 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:02.645192275 +0000 UTC m=+109.406719185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.176967 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnr8n"] Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.239721 4825 patch_prober.go:28] interesting pod/router-default-5444994796-v5ljt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 15:26:02 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Jan 22 15:26:02 crc kubenswrapper[4825]: [+]process-running ok Jan 22 15:26:02 crc kubenswrapper[4825]: healthz check failed Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.239787 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v5ljt" podUID="bda22efd-beea-406e-a9d8-cb04fac11b9c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.253741 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:02 crc kubenswrapper[4825]: E0122 15:26:02.254186 4825 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:02.754167961 +0000 UTC m=+109.515694871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.285064 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.290314 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-phwjz" Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.356875 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:02 crc kubenswrapper[4825]: E0122 15:26:02.357849 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:02.857833564 +0000 UTC m=+109.619360584 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.457560 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:02 crc kubenswrapper[4825]: E0122 15:26:02.457738 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:02.957709079 +0000 UTC m=+109.719235989 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.457814 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:02 crc kubenswrapper[4825]: E0122 15:26:02.458358 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:02.958348777 +0000 UTC m=+109.719875687 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.497587 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk5sb\" (UniqueName: \"kubernetes.io/projected/1ae2b938-4488-405a-bba8-4693edacafc8-kube-api-access-bk5sb\") pod \"certified-operators-zzxc9\" (UID: \"1ae2b938-4488-405a-bba8-4693edacafc8\") " pod="openshift-marketplace/certified-operators-zzxc9" Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.548696 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnr8n" event={"ID":"c7b87096-547e-442b-9701-4e6a222ce547","Type":"ContainerStarted","Data":"de4ea97cb84d102ec9ab9befb91e6c504fecc3a058cce914003091c286d8ef0f"} Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.548740 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnr8n" event={"ID":"c7b87096-547e-442b-9701-4e6a222ce547","Type":"ContainerStarted","Data":"c7fd54f869b6c06955a89b9d5cf3a80b4e3a8c84e8ebac44bfffb9a66325e943"} Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.564744 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:02 crc kubenswrapper[4825]: E0122 15:26:02.565028 4825 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:03.065014306 +0000 UTC m=+109.826541216 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.575413 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktlrw" event={"ID":"43675eea-b514-472a-9f19-d93ec4ddf044","Type":"ContainerStarted","Data":"4038b9197b5ba86e9d2f5c3818f4406ba7ea92a471b51adab61ea8b006986ee0"} Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.579569 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9b4mz" event={"ID":"1cbd0ab2-dc30-46da-8442-206675e887a4","Type":"ContainerStarted","Data":"7fda88878d95100a363844c602d7626983c0e393bf35e64e46dc0f2d7582478e"} Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.581542 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw69k" event={"ID":"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104","Type":"ContainerStarted","Data":"da163c98ade44996898e0e4a4560fb51ab37ddfd1522dc2adf50275c39cc06fc"} Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.581559 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw69k" 
event={"ID":"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104","Type":"ContainerStarted","Data":"e2defe39a630aa086b154d6bab4c422f8d0db5475bc4fad2f3d0a9b2b5912edd"} Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.718830 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zzxc9" Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.719293 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:02 crc kubenswrapper[4825]: E0122 15:26:02.731802 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:03.231775543 +0000 UTC m=+109.993302453 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.822799 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:02 crc kubenswrapper[4825]: E0122 15:26:02.825409 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:03.325383349 +0000 UTC m=+110.086910269 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.826572 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:02 crc kubenswrapper[4825]: I0122 15:26:02.873495 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:26:03 crc kubenswrapper[4825]: E0122 15:26:03.007100 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:03.507075623 +0000 UTC m=+110.268602533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.137836 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:03 crc kubenswrapper[4825]: E0122 15:26:03.138656 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:03.638640024 +0000 UTC m=+110.400166934 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.146301 4825 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-wvhkx container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.146367 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx" podUID="2f9a9d41-e120-4f5d-ac4a-6618ba0b19a1" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.159065 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pvfdl"] Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.160141 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvfdl" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.160789 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbkgn" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.187703 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvfdl"] Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.250585 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:03 crc kubenswrapper[4825]: E0122 15:26:03.250997 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:03.750965904 +0000 UTC m=+110.512492814 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.310743 4825 patch_prober.go:28] interesting pod/router-default-5444994796-v5ljt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 15:26:03 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Jan 22 15:26:03 crc kubenswrapper[4825]: [+]process-running ok Jan 22 15:26:03 crc kubenswrapper[4825]: healthz check failed Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.310794 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v5ljt" podUID="bda22efd-beea-406e-a9d8-cb04fac11b9c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.327288 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.436600 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.436709 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44760e51-861e-4283-9593-8832b2d55847-utilities\") pod \"redhat-marketplace-pvfdl\" (UID: \"44760e51-861e-4283-9593-8832b2d55847\") " pod="openshift-marketplace/redhat-marketplace-pvfdl" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.436771 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z826h\" (UniqueName: \"kubernetes.io/projected/44760e51-861e-4283-9593-8832b2d55847-kube-api-access-z826h\") pod \"redhat-marketplace-pvfdl\" (UID: \"44760e51-861e-4283-9593-8832b2d55847\") " pod="openshift-marketplace/redhat-marketplace-pvfdl" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.436810 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44760e51-861e-4283-9593-8832b2d55847-catalog-content\") pod \"redhat-marketplace-pvfdl\" (UID: \"44760e51-861e-4283-9593-8832b2d55847\") " pod="openshift-marketplace/redhat-marketplace-pvfdl" Jan 22 15:26:03 crc kubenswrapper[4825]: E0122 15:26:03.436859 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:03.936836868 +0000 UTC m=+110.698363778 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.436903 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:03 crc kubenswrapper[4825]: E0122 15:26:03.437361 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:03.937353832 +0000 UTC m=+110.698880742 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.538186 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qql6g"] Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.539365 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.539587 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44760e51-861e-4283-9593-8832b2d55847-catalog-content\") pod \"redhat-marketplace-pvfdl\" (UID: \"44760e51-861e-4283-9593-8832b2d55847\") " pod="openshift-marketplace/redhat-marketplace-pvfdl" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.539686 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44760e51-861e-4283-9593-8832b2d55847-utilities\") pod \"redhat-marketplace-pvfdl\" (UID: \"44760e51-861e-4283-9593-8832b2d55847\") " pod="openshift-marketplace/redhat-marketplace-pvfdl" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.539730 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z826h\" (UniqueName: 
\"kubernetes.io/projected/44760e51-861e-4283-9593-8832b2d55847-kube-api-access-z826h\") pod \"redhat-marketplace-pvfdl\" (UID: \"44760e51-861e-4283-9593-8832b2d55847\") " pod="openshift-marketplace/redhat-marketplace-pvfdl" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.539733 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qql6g" Jan 22 15:26:03 crc kubenswrapper[4825]: E0122 15:26:03.540102 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:04.040085059 +0000 UTC m=+110.801611969 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.540422 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44760e51-861e-4283-9593-8832b2d55847-catalog-content\") pod \"redhat-marketplace-pvfdl\" (UID: \"44760e51-861e-4283-9593-8832b2d55847\") " pod="openshift-marketplace/redhat-marketplace-pvfdl" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.540494 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44760e51-861e-4283-9593-8832b2d55847-utilities\") pod \"redhat-marketplace-pvfdl\" (UID: \"44760e51-861e-4283-9593-8832b2d55847\") " 
pod="openshift-marketplace/redhat-marketplace-pvfdl" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.563510 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qql6g"] Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.592765 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z826h\" (UniqueName: \"kubernetes.io/projected/44760e51-861e-4283-9593-8832b2d55847-kube-api-access-z826h\") pod \"redhat-marketplace-pvfdl\" (UID: \"44760e51-861e-4283-9593-8832b2d55847\") " pod="openshift-marketplace/redhat-marketplace-pvfdl" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.603899 4825 generic.go:334] "Generic (PLEG): container finished" podID="c7b87096-547e-442b-9701-4e6a222ce547" containerID="de4ea97cb84d102ec9ab9befb91e6c504fecc3a058cce914003091c286d8ef0f" exitCode=0 Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.604159 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnr8n" event={"ID":"c7b87096-547e-442b-9701-4e6a222ce547","Type":"ContainerDied","Data":"de4ea97cb84d102ec9ab9befb91e6c504fecc3a058cce914003091c286d8ef0f"} Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.611403 4825 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.618306 4825 generic.go:334] "Generic (PLEG): container finished" podID="43675eea-b514-472a-9f19-d93ec4ddf044" containerID="0448306a6563ea153e461d2f45c8adbf34bd44b42c47525cbc03f6149ffa68e0" exitCode=0 Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.618386 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktlrw" event={"ID":"43675eea-b514-472a-9f19-d93ec4ddf044","Type":"ContainerDied","Data":"0448306a6563ea153e461d2f45c8adbf34bd44b42c47525cbc03f6149ffa68e0"} Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 
15:26:03.619957 4825 generic.go:334] "Generic (PLEG): container finished" podID="aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104" containerID="da163c98ade44996898e0e4a4560fb51ab37ddfd1522dc2adf50275c39cc06fc" exitCode=0 Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.620346 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw69k" event={"ID":"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104","Type":"ContainerDied","Data":"da163c98ade44996898e0e4a4560fb51ab37ddfd1522dc2adf50275c39cc06fc"} Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.649359 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb2f5594-0942-47ab-be72-76cc52b73a6d-utilities\") pod \"redhat-marketplace-qql6g\" (UID: \"fb2f5594-0942-47ab-be72-76cc52b73a6d\") " pod="openshift-marketplace/redhat-marketplace-qql6g" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.649412 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.649455 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb2f5594-0942-47ab-be72-76cc52b73a6d-catalog-content\") pod \"redhat-marketplace-qql6g\" (UID: \"fb2f5594-0942-47ab-be72-76cc52b73a6d\") " pod="openshift-marketplace/redhat-marketplace-qql6g" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.649601 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59d6q\" (UniqueName: 
\"kubernetes.io/projected/fb2f5594-0942-47ab-be72-76cc52b73a6d-kube-api-access-59d6q\") pod \"redhat-marketplace-qql6g\" (UID: \"fb2f5594-0942-47ab-be72-76cc52b73a6d\") " pod="openshift-marketplace/redhat-marketplace-qql6g" Jan 22 15:26:03 crc kubenswrapper[4825]: E0122 15:26:03.650121 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:04.150104304 +0000 UTC m=+110.911631214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.750588 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.750862 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb2f5594-0942-47ab-be72-76cc52b73a6d-utilities\") pod \"redhat-marketplace-qql6g\" (UID: \"fb2f5594-0942-47ab-be72-76cc52b73a6d\") " pod="openshift-marketplace/redhat-marketplace-qql6g" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.750997 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fb2f5594-0942-47ab-be72-76cc52b73a6d-catalog-content\") pod \"redhat-marketplace-qql6g\" (UID: \"fb2f5594-0942-47ab-be72-76cc52b73a6d\") " pod="openshift-marketplace/redhat-marketplace-qql6g" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.751075 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59d6q\" (UniqueName: \"kubernetes.io/projected/fb2f5594-0942-47ab-be72-76cc52b73a6d-kube-api-access-59d6q\") pod \"redhat-marketplace-qql6g\" (UID: \"fb2f5594-0942-47ab-be72-76cc52b73a6d\") " pod="openshift-marketplace/redhat-marketplace-qql6g" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.752154 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb2f5594-0942-47ab-be72-76cc52b73a6d-catalog-content\") pod \"redhat-marketplace-qql6g\" (UID: \"fb2f5594-0942-47ab-be72-76cc52b73a6d\") " pod="openshift-marketplace/redhat-marketplace-qql6g" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.752207 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb2f5594-0942-47ab-be72-76cc52b73a6d-utilities\") pod \"redhat-marketplace-qql6g\" (UID: \"fb2f5594-0942-47ab-be72-76cc52b73a6d\") " pod="openshift-marketplace/redhat-marketplace-qql6g" Jan 22 15:26:03 crc kubenswrapper[4825]: E0122 15:26:03.752400 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:04.252369287 +0000 UTC m=+111.013896197 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.824238 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59d6q\" (UniqueName: \"kubernetes.io/projected/fb2f5594-0942-47ab-be72-76cc52b73a6d-kube-api-access-59d6q\") pod \"redhat-marketplace-qql6g\" (UID: \"fb2f5594-0942-47ab-be72-76cc52b73a6d\") " pod="openshift-marketplace/redhat-marketplace-qql6g" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.840288 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvfdl" Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.851795 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:03 crc kubenswrapper[4825]: E0122 15:26:03.852185 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:04.3521669 +0000 UTC m=+111.113693820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:03 crc kubenswrapper[4825]: I0122 15:26:03.858304 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qql6g" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.060242 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:04 crc kubenswrapper[4825]: E0122 15:26:04.060789 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:04.560775173 +0000 UTC m=+111.322302073 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.076013 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r887w"] Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.076950 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r887w" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.080821 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.093176 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r887w"] Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.161584 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:04 crc kubenswrapper[4825]: E0122 15:26:04.161926 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:04.661912924 +0000 UTC m=+111.423439834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.303780 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.303889 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llxd2\" (UniqueName: \"kubernetes.io/projected/9d0229a3-015e-43bf-bcb3-32088de6e95c-kube-api-access-llxd2\") pod \"redhat-operators-r887w\" (UID: \"9d0229a3-015e-43bf-bcb3-32088de6e95c\") " pod="openshift-marketplace/redhat-operators-r887w" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.303943 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0229a3-015e-43bf-bcb3-32088de6e95c-utilities\") pod \"redhat-operators-r887w\" (UID: \"9d0229a3-015e-43bf-bcb3-32088de6e95c\") " pod="openshift-marketplace/redhat-operators-r887w" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.303958 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0229a3-015e-43bf-bcb3-32088de6e95c-catalog-content\") pod \"redhat-operators-r887w\" (UID: 
\"9d0229a3-015e-43bf-bcb3-32088de6e95c\") " pod="openshift-marketplace/redhat-operators-r887w" Jan 22 15:26:04 crc kubenswrapper[4825]: E0122 15:26:04.304099 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:04.804087658 +0000 UTC m=+111.565614558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.393127 4825 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.406129 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llxd2\" (UniqueName: \"kubernetes.io/projected/9d0229a3-015e-43bf-bcb3-32088de6e95c-kube-api-access-llxd2\") pod \"redhat-operators-r887w\" (UID: \"9d0229a3-015e-43bf-bcb3-32088de6e95c\") " pod="openshift-marketplace/redhat-operators-r887w" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.406194 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.406232 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0229a3-015e-43bf-bcb3-32088de6e95c-utilities\") pod \"redhat-operators-r887w\" (UID: \"9d0229a3-015e-43bf-bcb3-32088de6e95c\") " pod="openshift-marketplace/redhat-operators-r887w" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.406256 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0229a3-015e-43bf-bcb3-32088de6e95c-catalog-content\") pod \"redhat-operators-r887w\" (UID: \"9d0229a3-015e-43bf-bcb3-32088de6e95c\") " pod="openshift-marketplace/redhat-operators-r887w" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.406862 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0229a3-015e-43bf-bcb3-32088de6e95c-catalog-content\") pod \"redhat-operators-r887w\" (UID: \"9d0229a3-015e-43bf-bcb3-32088de6e95c\") " pod="openshift-marketplace/redhat-operators-r887w" Jan 22 15:26:04 crc kubenswrapper[4825]: E0122 15:26:04.407490 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:04.907475944 +0000 UTC m=+111.669002854 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.407764 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0229a3-015e-43bf-bcb3-32088de6e95c-utilities\") pod \"redhat-operators-r887w\" (UID: \"9d0229a3-015e-43bf-bcb3-32088de6e95c\") " pod="openshift-marketplace/redhat-operators-r887w" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.407886 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q545b"] Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.419651 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q545b" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.455833 4825 patch_prober.go:28] interesting pod/router-default-5444994796-v5ljt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 15:26:04 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Jan 22 15:26:04 crc kubenswrapper[4825]: [+]process-running ok Jan 22 15:26:04 crc kubenswrapper[4825]: healthz check failed Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.455940 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v5ljt" podUID="bda22efd-beea-406e-a9d8-cb04fac11b9c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 15:26:04 crc kubenswrapper[4825]: E0122 15:26:04.510424 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:05.010402576 +0000 UTC m=+111.771929476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.511092 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.512641 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5076053-c905-48d9-b10a-c720e82c9cee-utilities\") pod \"redhat-operators-q545b\" (UID: \"b5076053-c905-48d9-b10a-c720e82c9cee\") " pod="openshift-marketplace/redhat-operators-q545b" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.512742 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5076053-c905-48d9-b10a-c720e82c9cee-catalog-content\") pod \"redhat-operators-q545b\" (UID: \"b5076053-c905-48d9-b10a-c720e82c9cee\") " pod="openshift-marketplace/redhat-operators-q545b" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.512775 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pd6q\" (UniqueName: \"kubernetes.io/projected/b5076053-c905-48d9-b10a-c720e82c9cee-kube-api-access-5pd6q\") pod \"redhat-operators-q545b\" (UID: 
\"b5076053-c905-48d9-b10a-c720e82c9cee\") " pod="openshift-marketplace/redhat-operators-q545b" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.512811 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.511834 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llxd2\" (UniqueName: \"kubernetes.io/projected/9d0229a3-015e-43bf-bcb3-32088de6e95c-kube-api-access-llxd2\") pod \"redhat-operators-r887w\" (UID: \"9d0229a3-015e-43bf-bcb3-32088de6e95c\") " pod="openshift-marketplace/redhat-operators-r887w" Jan 22 15:26:04 crc kubenswrapper[4825]: E0122 15:26:04.513138 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:05.013122894 +0000 UTC m=+111.774649804 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.759100 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.759506 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5076053-c905-48d9-b10a-c720e82c9cee-utilities\") pod \"redhat-operators-q545b\" (UID: \"b5076053-c905-48d9-b10a-c720e82c9cee\") " pod="openshift-marketplace/redhat-operators-q545b" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.759631 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5076053-c905-48d9-b10a-c720e82c9cee-catalog-content\") pod \"redhat-operators-q545b\" (UID: \"b5076053-c905-48d9-b10a-c720e82c9cee\") " pod="openshift-marketplace/redhat-operators-q545b" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.759666 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pd6q\" (UniqueName: \"kubernetes.io/projected/b5076053-c905-48d9-b10a-c720e82c9cee-kube-api-access-5pd6q\") pod \"redhat-operators-q545b\" (UID: \"b5076053-c905-48d9-b10a-c720e82c9cee\") " 
pod="openshift-marketplace/redhat-operators-q545b" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.760545 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r887w" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.765654 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q545b"] Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.767057 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5076053-c905-48d9-b10a-c720e82c9cee-catalog-content\") pod \"redhat-operators-q545b\" (UID: \"b5076053-c905-48d9-b10a-c720e82c9cee\") " pod="openshift-marketplace/redhat-operators-q545b" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.767646 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5076053-c905-48d9-b10a-c720e82c9cee-utilities\") pod \"redhat-operators-q545b\" (UID: \"b5076053-c905-48d9-b10a-c720e82c9cee\") " pod="openshift-marketplace/redhat-operators-q545b" Jan 22 15:26:04 crc kubenswrapper[4825]: E0122 15:26:04.769237 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 15:26:05.269189373 +0000 UTC m=+112.030716283 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.786443 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9b4mz" event={"ID":"1cbd0ab2-dc30-46da-8442-206675e887a4","Type":"ContainerStarted","Data":"f951016a94d0c57a141264ad63f32be727318fe1d9e596f5f8f8b76c0b8edf53"} Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.829466 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-7t586" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.855914 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pd6q\" (UniqueName: \"kubernetes.io/projected/b5076053-c905-48d9-b10a-c720e82c9cee-kube-api-access-5pd6q\") pod \"redhat-operators-q545b\" (UID: \"b5076053-c905-48d9-b10a-c720e82c9cee\") " pod="openshift-marketplace/redhat-operators-q545b" Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.868058 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:04 crc kubenswrapper[4825]: E0122 15:26:04.868351 4825 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 15:26:05.368340288 +0000 UTC m=+112.129867198 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:04 crc kubenswrapper[4825]: I0122 15:26:04.882563 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-9b4mz" podStartSLOduration=18.882546924 podStartE2EDuration="18.882546924s" podCreationTimestamp="2026-01-22 15:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:26:04.880444614 +0000 UTC m=+111.641971524" watchObservedRunningTime="2026-01-22 15:26:04.882546924 +0000 UTC m=+111.644073834" Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.048158 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:05 crc kubenswrapper[4825]: E0122 15:26:05.048962 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-22 15:26:05.54894684 +0000 UTC m=+112.310473750 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.103942 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q545b" Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.141381 4825 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-22T15:26:04.393312199Z","Handler":null,"Name":""} Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.147886 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wvhkx" Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.149108 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:05 crc kubenswrapper[4825]: E0122 15:26:05.149415 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 15:26:05.649405552 +0000 UTC m=+112.410932462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9s9f" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.205377 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvfdl"] Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.215046 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qql6g"] Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.227207 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.227804 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.231080 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.231315 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.256206 4825 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.256265 4825 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.257518 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.268339 4825 patch_prober.go:28] interesting pod/router-default-5444994796-v5ljt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 15:26:05 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Jan 22 15:26:05 crc kubenswrapper[4825]: [+]process-running ok Jan 22 15:26:05 crc kubenswrapper[4825]: healthz check failed Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.268383 4825 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-v5ljt" podUID="bda22efd-beea-406e-a9d8-cb04fac11b9c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 15:26:05 crc kubenswrapper[4825]: W0122 15:26:05.288789 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44760e51_861e_4283_9593_8832b2d55847.slice/crio-5dac78a7a67c1029586e043370afd44aa79af2fbc54dfe42d8daf6822ed853d2 WatchSource:0}: Error finding container 5dac78a7a67c1029586e043370afd44aa79af2fbc54dfe42d8daf6822ed853d2: Status 404 returned error can't find the container with id 5dac78a7a67c1029586e043370afd44aa79af2fbc54dfe42d8daf6822ed853d2 Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.304295 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.310044 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.344177 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zzxc9"] Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.359050 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14ab473b-da77-446d-8f6e-660fb63abb6a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"14ab473b-da77-446d-8f6e-660fb63abb6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.359140 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.359198 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14ab473b-da77-446d-8f6e-660fb63abb6a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"14ab473b-da77-446d-8f6e-660fb63abb6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.367234 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.367265 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:05 crc kubenswrapper[4825]: W0122 15:26:05.424291 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ae2b938_4488_405a_bba8_4693edacafc8.slice/crio-40295479b37e5d7c09613b5c68f1140a74b8c641862df6a93cdd087c7ea2b8af WatchSource:0}: Error finding container 40295479b37e5d7c09613b5c68f1140a74b8c641862df6a93cdd087c7ea2b8af: Status 404 returned error can't find the container with id 40295479b37e5d7c09613b5c68f1140a74b8c641862df6a93cdd087c7ea2b8af Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.461086 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14ab473b-da77-446d-8f6e-660fb63abb6a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"14ab473b-da77-446d-8f6e-660fb63abb6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.461178 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14ab473b-da77-446d-8f6e-660fb63abb6a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"14ab473b-da77-446d-8f6e-660fb63abb6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 
15:26:05.461241 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14ab473b-da77-446d-8f6e-660fb63abb6a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"14ab473b-da77-446d-8f6e-660fb63abb6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.507738 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14ab473b-da77-446d-8f6e-660fb63abb6a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"14ab473b-da77-446d-8f6e-660fb63abb6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.546164 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.546702 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r887w"] Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.555178 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9s9f\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.588713 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 15:26:05 crc kubenswrapper[4825]: I0122 15:26:05.731685 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 15:26:05.799652 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 15:26:05.800818 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 15:26:05.807575 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 15:26:05.808517 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 15:26:05.811225 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 15:26:05.881011 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qql6g" event={"ID":"fb2f5594-0942-47ab-be72-76cc52b73a6d","Type":"ContainerStarted","Data":"8603fd8382e55dfdfd279509be994542cdf26a6037ec1963c88e6cb4f17bf971"} Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 15:26:05.882455 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvfdl" event={"ID":"44760e51-861e-4283-9593-8832b2d55847","Type":"ContainerStarted","Data":"5dac78a7a67c1029586e043370afd44aa79af2fbc54dfe42d8daf6822ed853d2"} Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 15:26:05.891013 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r887w" event={"ID":"9d0229a3-015e-43bf-bcb3-32088de6e95c","Type":"ContainerStarted","Data":"3ac038a4d32c6617a7d439c1f35438f2805135201f9eaef31f38977d6dabc2e9"} 
Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 15:26:05.894828 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzxc9" event={"ID":"1ae2b938-4488-405a-bba8-4693edacafc8","Type":"ContainerStarted","Data":"40295479b37e5d7c09613b5c68f1140a74b8c641862df6a93cdd087c7ea2b8af"} Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 15:26:05.914803 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 15:26:05.914881 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 15:26:06.030053 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 15:26:06.030198 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 
15:26:06.030533 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 15:26:06.062849 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 15:26:06.224303 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 15:26:06.238821 4825 patch_prober.go:28] interesting pod/router-default-5444994796-v5ljt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 15:26:06 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Jan 22 15:26:06 crc kubenswrapper[4825]: [+]process-running ok Jan 22 15:26:06 crc kubenswrapper[4825]: healthz check failed Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 15:26:06.238881 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v5ljt" podUID="bda22efd-beea-406e-a9d8-cb04fac11b9c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 15:26:06.798025 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q545b"] Jan 22 15:26:06 crc kubenswrapper[4825]: W0122 15:26:06.852110 4825 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5076053_c905_48d9_b10a_c720e82c9cee.slice/crio-c0e8d1114e9578a59ac9d40667bc8074bf04a1ffe1b0df35dd41206ae0ae3b97 WatchSource:0}: Error finding container c0e8d1114e9578a59ac9d40667bc8074bf04a1ffe1b0df35dd41206ae0ae3b97: Status 404 returned error can't find the container with id c0e8d1114e9578a59ac9d40667bc8074bf04a1ffe1b0df35dd41206ae0ae3b97 Jan 22 15:26:06 crc kubenswrapper[4825]: I0122 15:26:06.855903 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 15:26:07 crc kubenswrapper[4825]: I0122 15:26:06.952051 4825 generic.go:334] "Generic (PLEG): container finished" podID="caae48a6-c8ee-4c56-91cc-fe8f4b21e313" containerID="0b3cf3720325cc7d91844eb51ef35e896132c208ab98b5ec8eb68cf404526e03" exitCode=0 Jan 22 15:26:07 crc kubenswrapper[4825]: I0122 15:26:06.952109 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz" event={"ID":"caae48a6-c8ee-4c56-91cc-fe8f4b21e313","Type":"ContainerDied","Data":"0b3cf3720325cc7d91844eb51ef35e896132c208ab98b5ec8eb68cf404526e03"} Jan 22 15:26:07 crc kubenswrapper[4825]: I0122 15:26:06.958479 4825 generic.go:334] "Generic (PLEG): container finished" podID="44760e51-861e-4283-9593-8832b2d55847" containerID="d0ed022277d028a0d5b07756b32159f9c20f8d07f17e0c1da3af23062c3ea9b1" exitCode=0 Jan 22 15:26:07 crc kubenswrapper[4825]: I0122 15:26:06.958535 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvfdl" event={"ID":"44760e51-861e-4283-9593-8832b2d55847","Type":"ContainerDied","Data":"d0ed022277d028a0d5b07756b32159f9c20f8d07f17e0c1da3af23062c3ea9b1"} Jan 22 15:26:07 crc kubenswrapper[4825]: I0122 15:26:06.960447 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q545b" 
event={"ID":"b5076053-c905-48d9-b10a-c720e82c9cee","Type":"ContainerStarted","Data":"c0e8d1114e9578a59ac9d40667bc8074bf04a1ffe1b0df35dd41206ae0ae3b97"} Jan 22 15:26:07 crc kubenswrapper[4825]: I0122 15:26:06.969894 4825 generic.go:334] "Generic (PLEG): container finished" podID="9d0229a3-015e-43bf-bcb3-32088de6e95c" containerID="3bbb894a0610dbc4a3f082274a17e142ddc9ee58096e09ed835f64cb6d070446" exitCode=0 Jan 22 15:26:07 crc kubenswrapper[4825]: I0122 15:26:06.969963 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r887w" event={"ID":"9d0229a3-015e-43bf-bcb3-32088de6e95c","Type":"ContainerDied","Data":"3bbb894a0610dbc4a3f082274a17e142ddc9ee58096e09ed835f64cb6d070446"} Jan 22 15:26:07 crc kubenswrapper[4825]: I0122 15:26:06.976350 4825 generic.go:334] "Generic (PLEG): container finished" podID="1ae2b938-4488-405a-bba8-4693edacafc8" containerID="ec9f1c769fa8cd54b27fdfb930f47caab08333676b05fcfa0ccc2c4db3599b94" exitCode=0 Jan 22 15:26:07 crc kubenswrapper[4825]: I0122 15:26:06.976396 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzxc9" event={"ID":"1ae2b938-4488-405a-bba8-4693edacafc8","Type":"ContainerDied","Data":"ec9f1c769fa8cd54b27fdfb930f47caab08333676b05fcfa0ccc2c4db3599b94"} Jan 22 15:26:07 crc kubenswrapper[4825]: I0122 15:26:06.980274 4825 generic.go:334] "Generic (PLEG): container finished" podID="fb2f5594-0942-47ab-be72-76cc52b73a6d" containerID="0581c484e8242e16672f8448dd2687c9c82573fb2f57cdf7073e2c3830fdcf6c" exitCode=0 Jan 22 15:26:07 crc kubenswrapper[4825]: I0122 15:26:06.980317 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qql6g" event={"ID":"fb2f5594-0942-47ab-be72-76cc52b73a6d","Type":"ContainerDied","Data":"0581c484e8242e16672f8448dd2687c9c82573fb2f57cdf7073e2c3830fdcf6c"} Jan 22 15:26:07 crc kubenswrapper[4825]: W0122 15:26:07.052456 4825 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-pod14ab473b_da77_446d_8f6e_660fb63abb6a.slice/crio-7e5abd7b1dd5b594fa51437082a27bd800a8dac2309033d829ae3feda8497015 WatchSource:0}: Error finding container 7e5abd7b1dd5b594fa51437082a27bd800a8dac2309033d829ae3feda8497015: Status 404 returned error can't find the container with id 7e5abd7b1dd5b594fa51437082a27bd800a8dac2309033d829ae3feda8497015 Jan 22 15:26:07 crc kubenswrapper[4825]: I0122 15:26:07.239330 4825 patch_prober.go:28] interesting pod/router-default-5444994796-v5ljt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 15:26:07 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Jan 22 15:26:07 crc kubenswrapper[4825]: [+]process-running ok Jan 22 15:26:07 crc kubenswrapper[4825]: healthz check failed Jan 22 15:26:07 crc kubenswrapper[4825]: I0122 15:26:07.239417 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v5ljt" podUID="bda22efd-beea-406e-a9d8-cb04fac11b9c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 15:26:07 crc kubenswrapper[4825]: I0122 15:26:07.559428 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 15:26:07 crc kubenswrapper[4825]: I0122 15:26:07.588941 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z9s9f"] Jan 22 15:26:08 crc kubenswrapper[4825]: I0122 15:26:08.042154 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 22 15:26:08 crc kubenswrapper[4825]: I0122 15:26:08.042209 4825 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 22 15:26:08 crc kubenswrapper[4825]: I0122 15:26:08.042707 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 22 15:26:08 crc kubenswrapper[4825]: I0122 15:26:08.042737 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 22 15:26:08 crc kubenswrapper[4825]: I0122 15:26:08.102545 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" event={"ID":"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351","Type":"ContainerStarted","Data":"22e09198b3f77ff1dcbf4aa4cf13e87d0504343ee0d69e759758a30da0539363"} Jan 22 15:26:08 crc kubenswrapper[4825]: I0122 15:26:08.104917 4825 generic.go:334] "Generic (PLEG): container finished" podID="b5076053-c905-48d9-b10a-c720e82c9cee" containerID="fdc96b7fa9d4898de293842b61cbf959383cd70b5f5fdf6b9ef0fe788dae8fcb" exitCode=0 Jan 22 15:26:08 crc kubenswrapper[4825]: I0122 15:26:08.105038 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q545b" event={"ID":"b5076053-c905-48d9-b10a-c720e82c9cee","Type":"ContainerDied","Data":"fdc96b7fa9d4898de293842b61cbf959383cd70b5f5fdf6b9ef0fe788dae8fcb"} Jan 22 15:26:08 crc kubenswrapper[4825]: I0122 15:26:08.154649 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf","Type":"ContainerStarted","Data":"f3b2bbfc7907f336459afe45e7e6f72c3e04cdcd57bd3537f2de2611851c5fb2"} Jan 22 15:26:08 crc kubenswrapper[4825]: I0122 15:26:08.235616 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"14ab473b-da77-446d-8f6e-660fb63abb6a","Type":"ContainerStarted","Data":"7e5abd7b1dd5b594fa51437082a27bd800a8dac2309033d829ae3feda8497015"} Jan 22 15:26:08 crc kubenswrapper[4825]: I0122 15:26:08.240598 4825 patch_prober.go:28] interesting pod/router-default-5444994796-v5ljt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 15:26:08 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Jan 22 15:26:08 crc kubenswrapper[4825]: [+]process-running ok Jan 22 15:26:08 crc kubenswrapper[4825]: healthz check failed Jan 22 15:26:08 crc kubenswrapper[4825]: I0122 15:26:08.240643 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v5ljt" podUID="bda22efd-beea-406e-a9d8-cb04fac11b9c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.257662 4825 patch_prober.go:28] interesting pod/console-f9d7485db-qvds8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.258080 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qvds8" podUID="81d43c37-4152-47d0-be95-a390693902e9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": 
dial tcp 10.217.0.13:8443: connect: connection refused" Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.257859 4825 patch_prober.go:28] interesting pod/router-default-5444994796-v5ljt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 15:26:09 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Jan 22 15:26:09 crc kubenswrapper[4825]: [+]process-running ok Jan 22 15:26:09 crc kubenswrapper[4825]: healthz check failed Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.258696 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v5ljt" podUID="bda22efd-beea-406e-a9d8-cb04fac11b9c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.279177 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" event={"ID":"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351","Type":"ContainerStarted","Data":"632bdc04edf2b5f2e2fefa31a8faa622df65e094aba5d895a2b676c8cf843421"} Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.279233 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.292407 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf","Type":"ContainerStarted","Data":"517273b3a00ffcf2c1ec8c9cc7d6425a4e86079903e6670af9cb09a0f84f3469"} Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.296323 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"14ab473b-da77-446d-8f6e-660fb63abb6a","Type":"ContainerStarted","Data":"aa3aac6e960716a90098f461faec754f1a1e7fa71215fd3eb9f4ef1c1c4ced97"} Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.419848 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.419833282 podStartE2EDuration="4.419833282s" podCreationTimestamp="2026-01-22 15:26:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:26:09.417707631 +0000 UTC m=+116.179234541" watchObservedRunningTime="2026-01-22 15:26:09.419833282 +0000 UTC m=+116.181360192" Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.420376 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" podStartSLOduration=92.420369877 podStartE2EDuration="1m32.420369877s" podCreationTimestamp="2026-01-22 15:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:26:09.310437355 +0000 UTC m=+116.071964265" watchObservedRunningTime="2026-01-22 15:26:09.420369877 +0000 UTC m=+116.181896787" Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.555845 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz" Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.573858 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.573839464 podStartE2EDuration="4.573839464s" podCreationTimestamp="2026-01-22 15:26:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:26:09.442219161 +0000 UTC m=+116.203746081" watchObservedRunningTime="2026-01-22 15:26:09.573839464 +0000 UTC m=+116.335366374" Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.725684 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/caae48a6-c8ee-4c56-91cc-fe8f4b21e313-config-volume\") pod \"caae48a6-c8ee-4c56-91cc-fe8f4b21e313\" (UID: \"caae48a6-c8ee-4c56-91cc-fe8f4b21e313\") " Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.725725 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/caae48a6-c8ee-4c56-91cc-fe8f4b21e313-secret-volume\") pod \"caae48a6-c8ee-4c56-91cc-fe8f4b21e313\" (UID: \"caae48a6-c8ee-4c56-91cc-fe8f4b21e313\") " Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.725748 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8xds\" (UniqueName: \"kubernetes.io/projected/caae48a6-c8ee-4c56-91cc-fe8f4b21e313-kube-api-access-s8xds\") pod \"caae48a6-c8ee-4c56-91cc-fe8f4b21e313\" (UID: \"caae48a6-c8ee-4c56-91cc-fe8f4b21e313\") " Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.731006 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caae48a6-c8ee-4c56-91cc-fe8f4b21e313-kube-api-access-s8xds" 
(OuterVolumeSpecName: "kube-api-access-s8xds") pod "caae48a6-c8ee-4c56-91cc-fe8f4b21e313" (UID: "caae48a6-c8ee-4c56-91cc-fe8f4b21e313"). InnerVolumeSpecName "kube-api-access-s8xds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.732090 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caae48a6-c8ee-4c56-91cc-fe8f4b21e313-config-volume" (OuterVolumeSpecName: "config-volume") pod "caae48a6-c8ee-4c56-91cc-fe8f4b21e313" (UID: "caae48a6-c8ee-4c56-91cc-fe8f4b21e313"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.737005 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caae48a6-c8ee-4c56-91cc-fe8f4b21e313-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "caae48a6-c8ee-4c56-91cc-fe8f4b21e313" (UID: "caae48a6-c8ee-4c56-91cc-fe8f4b21e313"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.830775 4825 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/caae48a6-c8ee-4c56-91cc-fe8f4b21e313-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.830807 4825 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/caae48a6-c8ee-4c56-91cc-fe8f4b21e313-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 15:26:09 crc kubenswrapper[4825]: I0122 15:26:09.830819 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8xds\" (UniqueName: \"kubernetes.io/projected/caae48a6-c8ee-4c56-91cc-fe8f4b21e313-kube-api-access-s8xds\") on node \"crc\" DevicePath \"\"" Jan 22 15:26:10 crc kubenswrapper[4825]: I0122 15:26:10.246708 4825 patch_prober.go:28] interesting pod/router-default-5444994796-v5ljt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 15:26:10 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Jan 22 15:26:10 crc kubenswrapper[4825]: [+]process-running ok Jan 22 15:26:10 crc kubenswrapper[4825]: healthz check failed Jan 22 15:26:10 crc kubenswrapper[4825]: I0122 15:26:10.246785 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v5ljt" podUID="bda22efd-beea-406e-a9d8-cb04fac11b9c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 15:26:10 crc kubenswrapper[4825]: I0122 15:26:10.326185 4825 generic.go:334] "Generic (PLEG): container finished" podID="7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf" containerID="517273b3a00ffcf2c1ec8c9cc7d6425a4e86079903e6670af9cb09a0f84f3469" exitCode=0 Jan 22 15:26:10 crc kubenswrapper[4825]: I0122 
15:26:10.326336 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf","Type":"ContainerDied","Data":"517273b3a00ffcf2c1ec8c9cc7d6425a4e86079903e6670af9cb09a0f84f3469"} Jan 22 15:26:10 crc kubenswrapper[4825]: I0122 15:26:10.330462 4825 generic.go:334] "Generic (PLEG): container finished" podID="14ab473b-da77-446d-8f6e-660fb63abb6a" containerID="aa3aac6e960716a90098f461faec754f1a1e7fa71215fd3eb9f4ef1c1c4ced97" exitCode=0 Jan 22 15:26:10 crc kubenswrapper[4825]: I0122 15:26:10.330543 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"14ab473b-da77-446d-8f6e-660fb63abb6a","Type":"ContainerDied","Data":"aa3aac6e960716a90098f461faec754f1a1e7fa71215fd3eb9f4ef1c1c4ced97"} Jan 22 15:26:10 crc kubenswrapper[4825]: I0122 15:26:10.338960 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz" Jan 22 15:26:10 crc kubenswrapper[4825]: I0122 15:26:10.339875 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz" event={"ID":"caae48a6-c8ee-4c56-91cc-fe8f4b21e313","Type":"ContainerDied","Data":"2ef9724cda4e706129e33b400e248ebe7544f13a0d1082810b7e9b7e1a537434"} Jan 22 15:26:10 crc kubenswrapper[4825]: I0122 15:26:10.339956 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ef9724cda4e706129e33b400e248ebe7544f13a0d1082810b7e9b7e1a537434" Jan 22 15:26:11 crc kubenswrapper[4825]: I0122 15:26:11.246395 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-v5ljt" Jan 22 15:26:11 crc kubenswrapper[4825]: I0122 15:26:11.249928 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ingress/router-default-5444994796-v5ljt" Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.037064 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.177074 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf-kubelet-dir\") pod \"7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf\" (UID: \"7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf\") " Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.177169 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf-kube-api-access\") pod \"7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf\" (UID: \"7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf\") " Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.177173 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf" (UID: "7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.177561 4825 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.208314 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf" (UID: "7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.252700 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.279167 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.368393 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf","Type":"ContainerDied","Data":"f3b2bbfc7907f336459afe45e7e6f72c3e04cdcd57bd3537f2de2611851c5fb2"} Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.368759 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3b2bbfc7907f336459afe45e7e6f72c3e04cdcd57bd3537f2de2611851c5fb2" Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.368837 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.373141 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"14ab473b-da77-446d-8f6e-660fb63abb6a","Type":"ContainerDied","Data":"7e5abd7b1dd5b594fa51437082a27bd800a8dac2309033d829ae3feda8497015"} Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.373178 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e5abd7b1dd5b594fa51437082a27bd800a8dac2309033d829ae3feda8497015" Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.373234 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.380021 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14ab473b-da77-446d-8f6e-660fb63abb6a-kube-api-access\") pod \"14ab473b-da77-446d-8f6e-660fb63abb6a\" (UID: \"14ab473b-da77-446d-8f6e-660fb63abb6a\") " Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.380113 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14ab473b-da77-446d-8f6e-660fb63abb6a-kubelet-dir\") pod \"14ab473b-da77-446d-8f6e-660fb63abb6a\" (UID: \"14ab473b-da77-446d-8f6e-660fb63abb6a\") " Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.380214 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14ab473b-da77-446d-8f6e-660fb63abb6a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "14ab473b-da77-446d-8f6e-660fb63abb6a" (UID: "14ab473b-da77-446d-8f6e-660fb63abb6a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.380490 4825 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14ab473b-da77-446d-8f6e-660fb63abb6a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.390960 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14ab473b-da77-446d-8f6e-660fb63abb6a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "14ab473b-da77-446d-8f6e-660fb63abb6a" (UID: "14ab473b-da77-446d-8f6e-660fb63abb6a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:26:12 crc kubenswrapper[4825]: I0122 15:26:12.481841 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14ab473b-da77-446d-8f6e-660fb63abb6a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 15:26:18 crc kubenswrapper[4825]: I0122 15:26:18.067859 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 22 15:26:18 crc kubenswrapper[4825]: I0122 15:26:18.068316 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 22 15:26:18 crc kubenswrapper[4825]: I0122 15:26:18.068374 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-f7rxw" Jan 22 15:26:18 crc kubenswrapper[4825]: I0122 15:26:18.072043 4825 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"dfc9d849f6655eb296b71399fab0466803e0dc50859ce2b8ce7b230a4fbc2227"} pod="openshift-console/downloads-7954f5f757-f7rxw" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 22 15:26:18 crc kubenswrapper[4825]: I0122 15:26:18.072126 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" containerID="cri-o://dfc9d849f6655eb296b71399fab0466803e0dc50859ce2b8ce7b230a4fbc2227" gracePeriod=2 Jan 22 15:26:18 crc kubenswrapper[4825]: I0122 15:26:18.072344 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 22 15:26:18 crc kubenswrapper[4825]: I0122 15:26:18.072365 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 22 15:26:18 crc kubenswrapper[4825]: I0122 15:26:18.072679 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 22 15:26:18 crc kubenswrapper[4825]: I0122 15:26:18.072704 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" 
output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 22 15:26:19 crc kubenswrapper[4825]: I0122 15:26:19.279563 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:26:19 crc kubenswrapper[4825]: I0122 15:26:19.284816 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-qvds8" Jan 22 15:26:19 crc kubenswrapper[4825]: I0122 15:26:19.706476 4825 generic.go:334] "Generic (PLEG): container finished" podID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerID="dfc9d849f6655eb296b71399fab0466803e0dc50859ce2b8ce7b230a4fbc2227" exitCode=0 Jan 22 15:26:19 crc kubenswrapper[4825]: I0122 15:26:19.706548 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-f7rxw" event={"ID":"bb8c16fb-b627-4b4d-8c02-5f9537eea746","Type":"ContainerDied","Data":"dfc9d849f6655eb296b71399fab0466803e0dc50859ce2b8ce7b230a4fbc2227"} Jan 22 15:26:25 crc kubenswrapper[4825]: I0122 15:26:25.739220 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:26:28 crc kubenswrapper[4825]: I0122 15:26:28.038225 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 22 15:26:28 crc kubenswrapper[4825]: I0122 15:26:28.038286 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 22 15:26:30 crc kubenswrapper[4825]: I0122 15:26:30.007390 
4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-89hpb" Jan 22 15:26:38 crc kubenswrapper[4825]: I0122 15:26:38.038312 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 22 15:26:38 crc kubenswrapper[4825]: I0122 15:26:38.038667 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 22 15:26:39 crc kubenswrapper[4825]: I0122 15:26:39.590778 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:26:39 crc kubenswrapper[4825]: I0122 15:26:39.591094 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:26:39 crc kubenswrapper[4825]: I0122 15:26:39.591503 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:26:39 crc kubenswrapper[4825]: I0122 15:26:39.591526 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:26:39 crc kubenswrapper[4825]: I0122 15:26:39.593753 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 15:26:39 crc kubenswrapper[4825]: I0122 15:26:39.593792 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 15:26:39 crc kubenswrapper[4825]: I0122 15:26:39.593864 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 15:26:39 crc kubenswrapper[4825]: I0122 15:26:39.602870 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 22 15:26:39 crc kubenswrapper[4825]: I0122 15:26:39.609267 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:26:39 crc kubenswrapper[4825]: I0122 15:26:39.614776 4825 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:26:39 crc kubenswrapper[4825]: I0122 15:26:39.615450 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:26:39 crc kubenswrapper[4825]: I0122 15:26:39.642475 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:26:39 crc kubenswrapper[4825]: I0122 15:26:39.777401 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 15:26:39 crc kubenswrapper[4825]: I0122 15:26:39.835313 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 15:26:39 crc kubenswrapper[4825]: I0122 15:26:39.905017 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:26:40 crc kubenswrapper[4825]: I0122 15:26:40.388862 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 15:26:40 crc kubenswrapper[4825]: E0122 15:26:40.389114 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14ab473b-da77-446d-8f6e-660fb63abb6a" containerName="pruner" Jan 22 15:26:40 crc kubenswrapper[4825]: I0122 15:26:40.389129 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ab473b-da77-446d-8f6e-660fb63abb6a" containerName="pruner" Jan 22 15:26:40 crc kubenswrapper[4825]: E0122 15:26:40.389156 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caae48a6-c8ee-4c56-91cc-fe8f4b21e313" containerName="collect-profiles" Jan 22 15:26:40 crc kubenswrapper[4825]: I0122 15:26:40.389164 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="caae48a6-c8ee-4c56-91cc-fe8f4b21e313" containerName="collect-profiles" Jan 22 15:26:40 crc kubenswrapper[4825]: E0122 15:26:40.389173 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf" containerName="pruner" Jan 22 15:26:40 crc kubenswrapper[4825]: I0122 15:26:40.389181 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf" containerName="pruner" Jan 22 15:26:40 crc kubenswrapper[4825]: I0122 15:26:40.389280 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="caae48a6-c8ee-4c56-91cc-fe8f4b21e313" containerName="collect-profiles" Jan 22 15:26:40 crc kubenswrapper[4825]: I0122 15:26:40.389300 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e9a5c21-04c4-43f9-ac0d-07ac3bf8fdaf" containerName="pruner" Jan 22 15:26:40 crc kubenswrapper[4825]: I0122 15:26:40.389312 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ab473b-da77-446d-8f6e-660fb63abb6a" 
containerName="pruner" Jan 22 15:26:40 crc kubenswrapper[4825]: I0122 15:26:40.389743 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 15:26:40 crc kubenswrapper[4825]: I0122 15:26:40.395233 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 15:26:40 crc kubenswrapper[4825]: I0122 15:26:40.395277 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 15:26:40 crc kubenswrapper[4825]: I0122 15:26:40.401060 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4dd9b99-c7b9-411d-8f6c-35ca0db49406-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d4dd9b99-c7b9-411d-8f6c-35ca0db49406\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 15:26:40 crc kubenswrapper[4825]: I0122 15:26:40.401110 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d4dd9b99-c7b9-411d-8f6c-35ca0db49406-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d4dd9b99-c7b9-411d-8f6c-35ca0db49406\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 15:26:40 crc kubenswrapper[4825]: I0122 15:26:40.404308 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 15:26:40 crc kubenswrapper[4825]: I0122 15:26:40.501912 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4dd9b99-c7b9-411d-8f6c-35ca0db49406-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d4dd9b99-c7b9-411d-8f6c-35ca0db49406\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 15:26:40 crc 
kubenswrapper[4825]: I0122 15:26:40.501966 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d4dd9b99-c7b9-411d-8f6c-35ca0db49406-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d4dd9b99-c7b9-411d-8f6c-35ca0db49406\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 15:26:40 crc kubenswrapper[4825]: I0122 15:26:40.502089 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d4dd9b99-c7b9-411d-8f6c-35ca0db49406-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d4dd9b99-c7b9-411d-8f6c-35ca0db49406\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 15:26:40 crc kubenswrapper[4825]: I0122 15:26:40.557554 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4dd9b99-c7b9-411d-8f6c-35ca0db49406-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d4dd9b99-c7b9-411d-8f6c-35ca0db49406\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 15:26:40 crc kubenswrapper[4825]: I0122 15:26:40.724380 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 15:26:45 crc kubenswrapper[4825]: I0122 15:26:45.993263 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 15:26:45 crc kubenswrapper[4825]: I0122 15:26:45.994665 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 15:26:45 crc kubenswrapper[4825]: I0122 15:26:45.997606 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 15:26:46 crc kubenswrapper[4825]: I0122 15:26:46.127511 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c66dbaa-8b04-4f97-be82-717510c14a1c-kube-api-access\") pod \"installer-9-crc\" (UID: \"2c66dbaa-8b04-4f97-be82-717510c14a1c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 15:26:46 crc kubenswrapper[4825]: I0122 15:26:46.127930 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c66dbaa-8b04-4f97-be82-717510c14a1c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2c66dbaa-8b04-4f97-be82-717510c14a1c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 15:26:46 crc kubenswrapper[4825]: I0122 15:26:46.127961 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2c66dbaa-8b04-4f97-be82-717510c14a1c-var-lock\") pod \"installer-9-crc\" (UID: \"2c66dbaa-8b04-4f97-be82-717510c14a1c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 15:26:46 crc kubenswrapper[4825]: I0122 15:26:46.229251 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c66dbaa-8b04-4f97-be82-717510c14a1c-kube-api-access\") pod \"installer-9-crc\" (UID: \"2c66dbaa-8b04-4f97-be82-717510c14a1c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 15:26:46 crc kubenswrapper[4825]: I0122 15:26:46.229328 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/2c66dbaa-8b04-4f97-be82-717510c14a1c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2c66dbaa-8b04-4f97-be82-717510c14a1c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 15:26:46 crc kubenswrapper[4825]: I0122 15:26:46.229363 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2c66dbaa-8b04-4f97-be82-717510c14a1c-var-lock\") pod \"installer-9-crc\" (UID: \"2c66dbaa-8b04-4f97-be82-717510c14a1c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 15:26:46 crc kubenswrapper[4825]: I0122 15:26:46.229480 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2c66dbaa-8b04-4f97-be82-717510c14a1c-var-lock\") pod \"installer-9-crc\" (UID: \"2c66dbaa-8b04-4f97-be82-717510c14a1c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 15:26:46 crc kubenswrapper[4825]: I0122 15:26:46.229528 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c66dbaa-8b04-4f97-be82-717510c14a1c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2c66dbaa-8b04-4f97-be82-717510c14a1c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 15:26:46 crc kubenswrapper[4825]: I0122 15:26:46.251916 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c66dbaa-8b04-4f97-be82-717510c14a1c-kube-api-access\") pod \"installer-9-crc\" (UID: \"2c66dbaa-8b04-4f97-be82-717510c14a1c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 15:26:46 crc kubenswrapper[4825]: I0122 15:26:46.367362 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 15:26:48 crc kubenswrapper[4825]: I0122 15:26:48.054677 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 22 15:26:48 crc kubenswrapper[4825]: I0122 15:26:48.054735 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 22 15:26:48 crc kubenswrapper[4825]: E0122 15:26:48.947561 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 22 15:26:48 crc kubenswrapper[4825]: E0122 15:26:48.948046 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xb9l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ktlrw_openshift-marketplace(43675eea-b514-472a-9f19-d93ec4ddf044): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 15:26:48 crc kubenswrapper[4825]: E0122 15:26:48.949701 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ktlrw" podUID="43675eea-b514-472a-9f19-d93ec4ddf044" Jan 22 15:26:50 crc 
kubenswrapper[4825]: E0122 15:26:50.183582 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ktlrw" podUID="43675eea-b514-472a-9f19-d93ec4ddf044" Jan 22 15:26:50 crc kubenswrapper[4825]: E0122 15:26:50.255940 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 22 15:26:50 crc kubenswrapper[4825]: E0122 15:26:50.256182 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-szwxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-wnr8n_openshift-marketplace(c7b87096-547e-442b-9701-4e6a222ce547): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 15:26:50 crc kubenswrapper[4825]: E0122 15:26:50.257413 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-wnr8n" podUID="c7b87096-547e-442b-9701-4e6a222ce547" Jan 22 15:26:55 crc 
kubenswrapper[4825]: E0122 15:26:55.390389 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-wnr8n" podUID="c7b87096-547e-442b-9701-4e6a222ce547" Jan 22 15:26:55 crc kubenswrapper[4825]: E0122 15:26:55.516105 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 22 15:26:55 crc kubenswrapper[4825]: E0122 15:26:55.516286 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llxd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-r887w_openshift-marketplace(9d0229a3-015e-43bf-bcb3-32088de6e95c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 15:26:55 crc kubenswrapper[4825]: E0122 15:26:55.517712 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-r887w" podUID="9d0229a3-015e-43bf-bcb3-32088de6e95c" Jan 22 15:26:57 crc 
kubenswrapper[4825]: E0122 15:26:57.624580 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-r887w" podUID="9d0229a3-015e-43bf-bcb3-32088de6e95c" Jan 22 15:26:57 crc kubenswrapper[4825]: E0122 15:26:57.829591 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 22 15:26:57 crc kubenswrapper[4825]: E0122 15:26:57.829726 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z826h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-pvfdl_openshift-marketplace(44760e51-861e-4283-9593-8832b2d55847): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 15:26:57 crc kubenswrapper[4825]: E0122 15:26:57.830954 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-pvfdl" podUID="44760e51-861e-4283-9593-8832b2d55847" Jan 22 15:26:57 crc 
kubenswrapper[4825]: E0122 15:26:57.858152 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 22 15:26:57 crc kubenswrapper[4825]: E0122 15:26:57.858316 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59d6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-qql6g_openshift-marketplace(fb2f5594-0942-47ab-be72-76cc52b73a6d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 15:26:57 crc kubenswrapper[4825]: E0122 15:26:57.859374 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qql6g" podUID="fb2f5594-0942-47ab-be72-76cc52b73a6d" Jan 22 15:26:58 crc kubenswrapper[4825]: E0122 15:26:57.897893 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 22 15:26:58 crc kubenswrapper[4825]: E0122 15:26:57.898361 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5pd6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-q545b_openshift-marketplace(b5076053-c905-48d9-b10a-c720e82c9cee): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 15:26:58 crc kubenswrapper[4825]: E0122 15:26:57.900659 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-q545b" podUID="b5076053-c905-48d9-b10a-c720e82c9cee" Jan 22 15:26:58 crc 
kubenswrapper[4825]: I0122 15:26:58.037139 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 22 15:26:58 crc kubenswrapper[4825]: I0122 15:26:58.037202 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 22 15:26:58 crc kubenswrapper[4825]: I0122 15:26:58.301923 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzxc9" event={"ID":"1ae2b938-4488-405a-bba8-4693edacafc8","Type":"ContainerStarted","Data":"f90ba9cf51d674c733922dbbc1bcdd710382992f6dfc7320415a1d30a967c0ca"} Jan 22 15:26:58 crc kubenswrapper[4825]: I0122 15:26:58.307723 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-f7rxw" event={"ID":"bb8c16fb-b627-4b4d-8c02-5f9537eea746","Type":"ContainerStarted","Data":"9d08f75ad9cf9e274a1e15d871917adbb121d41816918238f2cf87c583b0983f"} Jan 22 15:26:58 crc kubenswrapper[4825]: I0122 15:26:58.308697 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-f7rxw" Jan 22 15:26:58 crc kubenswrapper[4825]: I0122 15:26:58.308770 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 22 15:26:58 crc kubenswrapper[4825]: I0122 15:26:58.308807 4825 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 22 15:26:58 crc kubenswrapper[4825]: I0122 15:26:58.311225 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw69k" event={"ID":"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104","Type":"ContainerStarted","Data":"f2f7fb533441e31155fc3041f5725dd98fd4782f8057ec6dc82a1f789323dddc"} Jan 22 15:26:58 crc kubenswrapper[4825]: E0122 15:26:58.313185 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qql6g" podUID="fb2f5594-0942-47ab-be72-76cc52b73a6d" Jan 22 15:26:58 crc kubenswrapper[4825]: E0122 15:26:58.313265 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-q545b" podUID="b5076053-c905-48d9-b10a-c720e82c9cee" Jan 22 15:26:58 crc kubenswrapper[4825]: E0122 15:26:58.315168 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-pvfdl" podUID="44760e51-861e-4283-9593-8832b2d55847" Jan 22 15:26:58 crc kubenswrapper[4825]: I0122 15:26:58.757137 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 15:26:58 crc kubenswrapper[4825]: I0122 15:26:58.768567 4825 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 15:26:58 crc kubenswrapper[4825]: W0122 15:26:58.804801 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-7c5507e6705fcebdd420b4556db0831d341397917d2020a45ef5b8d3ec49d3cc WatchSource:0}: Error finding container 7c5507e6705fcebdd420b4556db0831d341397917d2020a45ef5b8d3ec49d3cc: Status 404 returned error can't find the container with id 7c5507e6705fcebdd420b4556db0831d341397917d2020a45ef5b8d3ec49d3cc Jan 22 15:27:00 crc kubenswrapper[4825]: W0122 15:27:00.045816 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-b17d5faf12c500d3c608881755624c582be4b69f48b4a491276077c4f3e78005 WatchSource:0}: Error finding container b17d5faf12c500d3c608881755624c582be4b69f48b4a491276077c4f3e78005: Status 404 returned error can't find the container with id b17d5faf12c500d3c608881755624c582be4b69f48b4a491276077c4f3e78005 Jan 22 15:27:00 crc kubenswrapper[4825]: I0122 15:27:00.109811 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7c5507e6705fcebdd420b4556db0831d341397917d2020a45ef5b8d3ec49d3cc"} Jan 22 15:27:00 crc kubenswrapper[4825]: I0122 15:27:00.119550 4825 generic.go:334] "Generic (PLEG): container finished" podID="aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104" containerID="f2f7fb533441e31155fc3041f5725dd98fd4782f8057ec6dc82a1f789323dddc" exitCode=0 Jan 22 15:27:00 crc kubenswrapper[4825]: I0122 15:27:00.119622 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw69k" 
event={"ID":"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104","Type":"ContainerDied","Data":"f2f7fb533441e31155fc3041f5725dd98fd4782f8057ec6dc82a1f789323dddc"} Jan 22 15:27:00 crc kubenswrapper[4825]: I0122 15:27:00.393406 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"228141d98a05308308e46cd282cd91c1d8ab474a449cba1cc02a5809e0e32ce0"} Jan 22 15:27:00 crc kubenswrapper[4825]: I0122 15:27:00.416527 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2c66dbaa-8b04-4f97-be82-717510c14a1c","Type":"ContainerStarted","Data":"844e4742378f7b3744b7c18862e15c4118bd329bd10af18fec2a0127b02ab003"} Jan 22 15:27:00 crc kubenswrapper[4825]: I0122 15:27:00.435457 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d4dd9b99-c7b9-411d-8f6c-35ca0db49406","Type":"ContainerStarted","Data":"e44e579562580964c0b676733d9c427631fdaa31a13b9116d3d4e1ba30eab7c9"} Jan 22 15:27:00 crc kubenswrapper[4825]: I0122 15:27:00.440162 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 22 15:27:00 crc kubenswrapper[4825]: I0122 15:27:00.440232 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 22 15:27:01 crc kubenswrapper[4825]: I0122 15:27:01.637412 4825 generic.go:334] "Generic (PLEG): container finished" podID="d4dd9b99-c7b9-411d-8f6c-35ca0db49406" 
containerID="a6e83ab5f282b658221c0b3d95772562b368c3f6b5599b0335d9d8c58a71d31d" exitCode=0 Jan 22 15:27:01 crc kubenswrapper[4825]: I0122 15:27:01.637471 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d4dd9b99-c7b9-411d-8f6c-35ca0db49406","Type":"ContainerDied","Data":"a6e83ab5f282b658221c0b3d95772562b368c3f6b5599b0335d9d8c58a71d31d"} Jan 22 15:27:01 crc kubenswrapper[4825]: I0122 15:27:01.640808 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a3fa795ee8c9b65d6ea1f5a36914dd955e3a37b2d6bb27671f9d115f6a8ca270"} Jan 22 15:27:01 crc kubenswrapper[4825]: I0122 15:27:01.641127 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b17d5faf12c500d3c608881755624c582be4b69f48b4a491276077c4f3e78005"} Jan 22 15:27:01 crc kubenswrapper[4825]: I0122 15:27:01.646633 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f38a58e9a5e6faaeb06128ba7eb6299059b92122dc10ee0c05d4590bd4bcbd04"} Jan 22 15:27:01 crc kubenswrapper[4825]: I0122 15:27:01.647196 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:27:01 crc kubenswrapper[4825]: I0122 15:27:01.666339 4825 generic.go:334] "Generic (PLEG): container finished" podID="1ae2b938-4488-405a-bba8-4693edacafc8" containerID="f90ba9cf51d674c733922dbbc1bcdd710382992f6dfc7320415a1d30a967c0ca" exitCode=0 Jan 22 15:27:01 crc kubenswrapper[4825]: I0122 15:27:01.666437 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-zzxc9" event={"ID":"1ae2b938-4488-405a-bba8-4693edacafc8","Type":"ContainerDied","Data":"f90ba9cf51d674c733922dbbc1bcdd710382992f6dfc7320415a1d30a967c0ca"} Jan 22 15:27:01 crc kubenswrapper[4825]: I0122 15:27:01.668911 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw69k" event={"ID":"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104","Type":"ContainerStarted","Data":"d03a30adc290233b767a819884fe622e42d1c9ec30410399eaad7e8502c0a1ca"} Jan 22 15:27:01 crc kubenswrapper[4825]: I0122 15:27:01.688848 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0290e0199b3f420d6ac7218cf87dd76b1088f18c66e21d41c54ba5799c445d44"} Jan 22 15:27:01 crc kubenswrapper[4825]: I0122 15:27:01.704307 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2c66dbaa-8b04-4f97-be82-717510c14a1c","Type":"ContainerStarted","Data":"27dd41f0de4000ea2769ce5f766e7e7f07b4bd2cbf69b107f31c41323ef7e7bd"} Jan 22 15:27:01 crc kubenswrapper[4825]: I0122 15:27:01.704610 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 22 15:27:01 crc kubenswrapper[4825]: I0122 15:27:01.704674 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 22 15:27:01 crc kubenswrapper[4825]: I0122 15:27:01.797329 4825 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/community-operators-sw69k" podStartSLOduration=4.54373439 podStartE2EDuration="1m1.797309413s" podCreationTimestamp="2026-01-22 15:26:00 +0000 UTC" firstStartedPulling="2026-01-22 15:26:03.621552358 +0000 UTC m=+110.383079268" lastFinishedPulling="2026-01-22 15:27:00.875127381 +0000 UTC m=+167.636654291" observedRunningTime="2026-01-22 15:27:01.792966928 +0000 UTC m=+168.554493858" watchObservedRunningTime="2026-01-22 15:27:01.797309413 +0000 UTC m=+168.558836323" Jan 22 15:27:03 crc kubenswrapper[4825]: I0122 15:27:03.278604 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 15:27:04 crc kubenswrapper[4825]: I0122 15:27:03.643191 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d4dd9b99-c7b9-411d-8f6c-35ca0db49406-kubelet-dir\") pod \"d4dd9b99-c7b9-411d-8f6c-35ca0db49406\" (UID: \"d4dd9b99-c7b9-411d-8f6c-35ca0db49406\") " Jan 22 15:27:04 crc kubenswrapper[4825]: I0122 15:27:03.643274 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4dd9b99-c7b9-411d-8f6c-35ca0db49406-kube-api-access\") pod \"d4dd9b99-c7b9-411d-8f6c-35ca0db49406\" (UID: \"d4dd9b99-c7b9-411d-8f6c-35ca0db49406\") " Jan 22 15:27:04 crc kubenswrapper[4825]: I0122 15:27:03.644676 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4dd9b99-c7b9-411d-8f6c-35ca0db49406-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d4dd9b99-c7b9-411d-8f6c-35ca0db49406" (UID: "d4dd9b99-c7b9-411d-8f6c-35ca0db49406"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:27:04 crc kubenswrapper[4825]: I0122 15:27:04.071497 4825 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d4dd9b99-c7b9-411d-8f6c-35ca0db49406-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:04 crc kubenswrapper[4825]: I0122 15:27:04.079925 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=19.079900874 podStartE2EDuration="19.079900874s" podCreationTimestamp="2026-01-22 15:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:27:01.832743026 +0000 UTC m=+168.594269946" watchObservedRunningTime="2026-01-22 15:27:04.079900874 +0000 UTC m=+170.841427784" Jan 22 15:27:04 crc kubenswrapper[4825]: I0122 15:27:04.106332 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 15:27:04 crc kubenswrapper[4825]: I0122 15:27:04.106799 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4dd9b99-c7b9-411d-8f6c-35ca0db49406-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d4dd9b99-c7b9-411d-8f6c-35ca0db49406" (UID: "d4dd9b99-c7b9-411d-8f6c-35ca0db49406"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:27:04 crc kubenswrapper[4825]: I0122 15:27:04.106803 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d4dd9b99-c7b9-411d-8f6c-35ca0db49406","Type":"ContainerDied","Data":"e44e579562580964c0b676733d9c427631fdaa31a13b9116d3d4e1ba30eab7c9"} Jan 22 15:27:04 crc kubenswrapper[4825]: I0122 15:27:04.107107 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e44e579562580964c0b676733d9c427631fdaa31a13b9116d3d4e1ba30eab7c9" Jan 22 15:27:04 crc kubenswrapper[4825]: I0122 15:27:04.173565 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4dd9b99-c7b9-411d-8f6c-35ca0db49406-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:05 crc kubenswrapper[4825]: I0122 15:27:05.972925 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:27:05 crc kubenswrapper[4825]: I0122 15:27:05.973279 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:27:08 crc kubenswrapper[4825]: I0122 15:27:08.173486 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 22 15:27:08 crc kubenswrapper[4825]: I0122 
15:27:08.173803 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 22 15:27:08 crc kubenswrapper[4825]: I0122 15:27:08.181613 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 22 15:27:08 crc kubenswrapper[4825]: I0122 15:27:08.181671 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 22 15:27:11 crc kubenswrapper[4825]: I0122 15:27:11.505914 4825 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-64d9l container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 15:27:11 crc kubenswrapper[4825]: I0122 15:27:11.505991 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l" podUID="1132329f-90a4-4bcf-a303-28ec140c7c3f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 15:27:11 crc kubenswrapper[4825]: I0122 15:27:11.508919 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-sw69k" Jan 22 15:27:11 crc kubenswrapper[4825]: I0122 15:27:11.511352 4825 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-64d9l container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 15:27:11 crc kubenswrapper[4825]: I0122 15:27:11.511388 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64d9l" podUID="1132329f-90a4-4bcf-a303-28ec140c7c3f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 15:27:11 crc kubenswrapper[4825]: I0122 15:27:11.512383 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sw69k" Jan 22 15:27:12 crc kubenswrapper[4825]: I0122 15:27:12.002142 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sw69k" Jan 22 15:27:12 crc kubenswrapper[4825]: I0122 15:27:12.651652 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sw69k" Jan 22 15:27:14 crc kubenswrapper[4825]: I0122 15:27:14.539355 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzxc9" event={"ID":"1ae2b938-4488-405a-bba8-4693edacafc8","Type":"ContainerStarted","Data":"7a4528eba4a368478593d080b978afa9b2ddfb02bfa0b14eda8b7c32f37386eb"} Jan 22 15:27:14 crc kubenswrapper[4825]: I0122 15:27:14.557516 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-zzxc9" podStartSLOduration=8.529497285 podStartE2EDuration="1m13.55750198s" podCreationTimestamp="2026-01-22 15:26:01 +0000 UTC" firstStartedPulling="2026-01-22 15:26:06.977688213 +0000 UTC m=+113.739215123" lastFinishedPulling="2026-01-22 15:27:12.005692868 +0000 UTC m=+178.767219818" observedRunningTime="2026-01-22 15:27:14.556920432 +0000 UTC m=+181.318447362" watchObservedRunningTime="2026-01-22 15:27:14.55750198 +0000 UTC m=+181.319028890" Jan 22 15:27:16 crc kubenswrapper[4825]: I0122 15:27:16.584243 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktlrw" event={"ID":"43675eea-b514-472a-9f19-d93ec4ddf044","Type":"ContainerStarted","Data":"6ccd2f81d9656773367f0f8426f9d9389fafbd24842760da4029b97b3f06c4be"} Jan 22 15:27:18 crc kubenswrapper[4825]: I0122 15:27:18.038064 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 22 15:27:18 crc kubenswrapper[4825]: I0122 15:27:18.038167 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7rxw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 22 15:27:18 crc kubenswrapper[4825]: I0122 15:27:18.038393 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 22 15:27:18 crc kubenswrapper[4825]: I0122 15:27:18.038441 4825 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-f7rxw" podUID="bb8c16fb-b627-4b4d-8c02-5f9537eea746" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 22 15:27:18 crc kubenswrapper[4825]: I0122 15:27:18.618230 4825 generic.go:334] "Generic (PLEG): container finished" podID="43675eea-b514-472a-9f19-d93ec4ddf044" containerID="6ccd2f81d9656773367f0f8426f9d9389fafbd24842760da4029b97b3f06c4be" exitCode=0 Jan 22 15:27:18 crc kubenswrapper[4825]: I0122 15:27:18.618272 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktlrw" event={"ID":"43675eea-b514-472a-9f19-d93ec4ddf044","Type":"ContainerDied","Data":"6ccd2f81d9656773367f0f8426f9d9389fafbd24842760da4029b97b3f06c4be"} Jan 22 15:27:22 crc kubenswrapper[4825]: I0122 15:27:22.719610 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zzxc9" Jan 22 15:27:22 crc kubenswrapper[4825]: I0122 15:27:22.719949 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zzxc9" Jan 22 15:27:22 crc kubenswrapper[4825]: I0122 15:27:22.767952 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zzxc9" Jan 22 15:27:23 crc kubenswrapper[4825]: I0122 15:27:23.764165 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zzxc9" Jan 22 15:27:23 crc kubenswrapper[4825]: I0122 15:27:23.800224 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zzxc9"] Jan 22 15:27:25 crc kubenswrapper[4825]: I0122 15:27:25.770231 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zzxc9" podUID="1ae2b938-4488-405a-bba8-4693edacafc8" 
containerName="registry-server" containerID="cri-o://7a4528eba4a368478593d080b978afa9b2ddfb02bfa0b14eda8b7c32f37386eb" gracePeriod=2 Jan 22 15:27:26 crc kubenswrapper[4825]: I0122 15:27:26.779487 4825 generic.go:334] "Generic (PLEG): container finished" podID="1ae2b938-4488-405a-bba8-4693edacafc8" containerID="7a4528eba4a368478593d080b978afa9b2ddfb02bfa0b14eda8b7c32f37386eb" exitCode=0 Jan 22 15:27:26 crc kubenswrapper[4825]: I0122 15:27:26.779539 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzxc9" event={"ID":"1ae2b938-4488-405a-bba8-4693edacafc8","Type":"ContainerDied","Data":"7a4528eba4a368478593d080b978afa9b2ddfb02bfa0b14eda8b7c32f37386eb"} Jan 22 15:27:28 crc kubenswrapper[4825]: I0122 15:27:28.070945 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-f7rxw" Jan 22 15:27:28 crc kubenswrapper[4825]: I0122 15:27:28.993314 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zzxc9" Jan 22 15:27:29 crc kubenswrapper[4825]: I0122 15:27:29.054941 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ae2b938-4488-405a-bba8-4693edacafc8-catalog-content\") pod \"1ae2b938-4488-405a-bba8-4693edacafc8\" (UID: \"1ae2b938-4488-405a-bba8-4693edacafc8\") " Jan 22 15:27:29 crc kubenswrapper[4825]: I0122 15:27:29.055014 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk5sb\" (UniqueName: \"kubernetes.io/projected/1ae2b938-4488-405a-bba8-4693edacafc8-kube-api-access-bk5sb\") pod \"1ae2b938-4488-405a-bba8-4693edacafc8\" (UID: \"1ae2b938-4488-405a-bba8-4693edacafc8\") " Jan 22 15:27:29 crc kubenswrapper[4825]: I0122 15:27:29.055106 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ae2b938-4488-405a-bba8-4693edacafc8-utilities\") pod \"1ae2b938-4488-405a-bba8-4693edacafc8\" (UID: \"1ae2b938-4488-405a-bba8-4693edacafc8\") " Jan 22 15:27:29 crc kubenswrapper[4825]: I0122 15:27:29.056275 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ae2b938-4488-405a-bba8-4693edacafc8-utilities" (OuterVolumeSpecName: "utilities") pod "1ae2b938-4488-405a-bba8-4693edacafc8" (UID: "1ae2b938-4488-405a-bba8-4693edacafc8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:27:29 crc kubenswrapper[4825]: I0122 15:27:29.064487 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ae2b938-4488-405a-bba8-4693edacafc8-kube-api-access-bk5sb" (OuterVolumeSpecName: "kube-api-access-bk5sb") pod "1ae2b938-4488-405a-bba8-4693edacafc8" (UID: "1ae2b938-4488-405a-bba8-4693edacafc8"). InnerVolumeSpecName "kube-api-access-bk5sb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:27:29 crc kubenswrapper[4825]: I0122 15:27:29.466849 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk5sb\" (UniqueName: \"kubernetes.io/projected/1ae2b938-4488-405a-bba8-4693edacafc8-kube-api-access-bk5sb\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:29 crc kubenswrapper[4825]: I0122 15:27:29.466903 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ae2b938-4488-405a-bba8-4693edacafc8-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:29 crc kubenswrapper[4825]: I0122 15:27:29.622926 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ae2b938-4488-405a-bba8-4693edacafc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ae2b938-4488-405a-bba8-4693edacafc8" (UID: "1ae2b938-4488-405a-bba8-4693edacafc8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:27:29 crc kubenswrapper[4825]: I0122 15:27:29.673840 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ae2b938-4488-405a-bba8-4693edacafc8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:29 crc kubenswrapper[4825]: I0122 15:27:29.800315 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzxc9" event={"ID":"1ae2b938-4488-405a-bba8-4693edacafc8","Type":"ContainerDied","Data":"40295479b37e5d7c09613b5c68f1140a74b8c641862df6a93cdd087c7ea2b8af"} Jan 22 15:27:29 crc kubenswrapper[4825]: I0122 15:27:29.800383 4825 scope.go:117] "RemoveContainer" containerID="7a4528eba4a368478593d080b978afa9b2ddfb02bfa0b14eda8b7c32f37386eb" Jan 22 15:27:29 crc kubenswrapper[4825]: I0122 15:27:29.800385 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zzxc9" Jan 22 15:27:29 crc kubenswrapper[4825]: I0122 15:27:29.831066 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zzxc9"] Jan 22 15:27:29 crc kubenswrapper[4825]: I0122 15:27:29.836071 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zzxc9"] Jan 22 15:27:31 crc kubenswrapper[4825]: I0122 15:27:31.325973 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f22rt"] Jan 22 15:27:31 crc kubenswrapper[4825]: I0122 15:27:31.525495 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ae2b938-4488-405a-bba8-4693edacafc8" path="/var/lib/kubelet/pods/1ae2b938-4488-405a-bba8-4693edacafc8/volumes" Jan 22 15:27:31 crc kubenswrapper[4825]: I0122 15:27:31.850481 4825 scope.go:117] "RemoveContainer" containerID="f90ba9cf51d674c733922dbbc1bcdd710382992f6dfc7320415a1d30a967c0ca" Jan 22 15:27:33 crc kubenswrapper[4825]: I0122 15:27:33.610111 4825 scope.go:117] "RemoveContainer" containerID="ec9f1c769fa8cd54b27fdfb930f47caab08333676b05fcfa0ccc2c4db3599b94" Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.519450 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ktlrw"] Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.575023 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sw69k"] Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.576085 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sw69k" podUID="aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104" containerName="registry-server" containerID="cri-o://d03a30adc290233b767a819884fe622e42d1c9ec30410399eaad7e8502c0a1ca" gracePeriod=30 Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.579801 
4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnr8n"] Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.585402 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kv4vj"] Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.585639 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" podUID="4d7e321a-a057-40e4-9826-4d9b8b46b30a" containerName="marketplace-operator" containerID="cri-o://78995e6260e6066a0e3a09656206ae1c0e4a7cffdcdf6ee0c7f8b4b74361b63f" gracePeriod=30 Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.589319 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvfdl"] Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.591408 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qql6g"] Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.599159 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q545b"] Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.599791 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r887w"] Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.602635 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k98tl"] Jan 22 15:27:34 crc kubenswrapper[4825]: E0122 15:27:34.602926 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4dd9b99-c7b9-411d-8f6c-35ca0db49406" containerName="pruner" Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.602943 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4dd9b99-c7b9-411d-8f6c-35ca0db49406" containerName="pruner" Jan 22 15:27:34 crc kubenswrapper[4825]: E0122 15:27:34.602966 4825 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae2b938-4488-405a-bba8-4693edacafc8" containerName="extract-utilities" Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.602987 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae2b938-4488-405a-bba8-4693edacafc8" containerName="extract-utilities" Jan 22 15:27:34 crc kubenswrapper[4825]: E0122 15:27:34.602999 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae2b938-4488-405a-bba8-4693edacafc8" containerName="registry-server" Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.603006 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae2b938-4488-405a-bba8-4693edacafc8" containerName="registry-server" Jan 22 15:27:34 crc kubenswrapper[4825]: E0122 15:27:34.603019 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae2b938-4488-405a-bba8-4693edacafc8" containerName="extract-content" Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.603026 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae2b938-4488-405a-bba8-4693edacafc8" containerName="extract-content" Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.603136 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4dd9b99-c7b9-411d-8f6c-35ca0db49406" containerName="pruner" Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.603151 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ae2b938-4488-405a-bba8-4693edacafc8" containerName="registry-server" Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.603598 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k98tl" Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.605163 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k98tl"] Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.672845 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3c53d3cf-ed7c-4579-a577-9e19ffb5d58e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k98tl\" (UID: \"3c53d3cf-ed7c-4579-a577-9e19ffb5d58e\") " pod="openshift-marketplace/marketplace-operator-79b997595-k98tl" Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.672890 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj7hc\" (UniqueName: \"kubernetes.io/projected/3c53d3cf-ed7c-4579-a577-9e19ffb5d58e-kube-api-access-hj7hc\") pod \"marketplace-operator-79b997595-k98tl\" (UID: \"3c53d3cf-ed7c-4579-a577-9e19ffb5d58e\") " pod="openshift-marketplace/marketplace-operator-79b997595-k98tl" Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.672933 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3c53d3cf-ed7c-4579-a577-9e19ffb5d58e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k98tl\" (UID: \"3c53d3cf-ed7c-4579-a577-9e19ffb5d58e\") " pod="openshift-marketplace/marketplace-operator-79b997595-k98tl" Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.938564 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3c53d3cf-ed7c-4579-a577-9e19ffb5d58e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k98tl\" (UID: 
\"3c53d3cf-ed7c-4579-a577-9e19ffb5d58e\") " pod="openshift-marketplace/marketplace-operator-79b997595-k98tl" Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.938609 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj7hc\" (UniqueName: \"kubernetes.io/projected/3c53d3cf-ed7c-4579-a577-9e19ffb5d58e-kube-api-access-hj7hc\") pod \"marketplace-operator-79b997595-k98tl\" (UID: \"3c53d3cf-ed7c-4579-a577-9e19ffb5d58e\") " pod="openshift-marketplace/marketplace-operator-79b997595-k98tl" Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.938662 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3c53d3cf-ed7c-4579-a577-9e19ffb5d58e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k98tl\" (UID: \"3c53d3cf-ed7c-4579-a577-9e19ffb5d58e\") " pod="openshift-marketplace/marketplace-operator-79b997595-k98tl" Jan 22 15:27:34 crc kubenswrapper[4825]: I0122 15:27:34.940470 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3c53d3cf-ed7c-4579-a577-9e19ffb5d58e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k98tl\" (UID: \"3c53d3cf-ed7c-4579-a577-9e19ffb5d58e\") " pod="openshift-marketplace/marketplace-operator-79b997595-k98tl" Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:34.967025 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvfdl" event={"ID":"44760e51-861e-4283-9593-8832b2d55847","Type":"ContainerStarted","Data":"383fd915197ab9120194789429ad6a5951c87c32ed86056d37f73983def17fde"} Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:34.967242 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pvfdl" podUID="44760e51-861e-4283-9593-8832b2d55847" containerName="extract-content" 
containerID="cri-o://383fd915197ab9120194789429ad6a5951c87c32ed86056d37f73983def17fde" gracePeriod=30 Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:34.983877 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r887w" event={"ID":"9d0229a3-015e-43bf-bcb3-32088de6e95c","Type":"ContainerStarted","Data":"483d6141f6ffd8462d8b1566459c3b5d8092cc381d32aeec49dfbb35f988fca5"} Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:35.002323 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r887w" podUID="9d0229a3-015e-43bf-bcb3-32088de6e95c" containerName="extract-content" containerID="cri-o://483d6141f6ffd8462d8b1566459c3b5d8092cc381d32aeec49dfbb35f988fca5" gracePeriod=30 Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:35.002817 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj7hc\" (UniqueName: \"kubernetes.io/projected/3c53d3cf-ed7c-4579-a577-9e19ffb5d58e-kube-api-access-hj7hc\") pod \"marketplace-operator-79b997595-k98tl\" (UID: \"3c53d3cf-ed7c-4579-a577-9e19ffb5d58e\") " pod="openshift-marketplace/marketplace-operator-79b997595-k98tl" Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:35.014938 4825 generic.go:334] "Generic (PLEG): container finished" podID="aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104" containerID="d03a30adc290233b767a819884fe622e42d1c9ec30410399eaad7e8502c0a1ca" exitCode=0 Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:35.015078 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw69k" event={"ID":"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104","Type":"ContainerDied","Data":"d03a30adc290233b767a819884fe622e42d1c9ec30410399eaad7e8502c0a1ca"} Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:35.017246 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qql6g" 
event={"ID":"fb2f5594-0942-47ab-be72-76cc52b73a6d","Type":"ContainerStarted","Data":"518b5686c119c19727845e6c7bf2de905e9dec9b537a7a02c478547da7449533"} Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:35.017398 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qql6g" podUID="fb2f5594-0942-47ab-be72-76cc52b73a6d" containerName="extract-content" containerID="cri-o://518b5686c119c19727845e6c7bf2de905e9dec9b537a7a02c478547da7449533" gracePeriod=30 Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:35.025318 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnr8n" event={"ID":"c7b87096-547e-442b-9701-4e6a222ce547","Type":"ContainerStarted","Data":"3ec609eb2dc9c093acded42d4fe4b76a13cc4f245a98175cb04fb09a960113d2"} Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:35.025607 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wnr8n" podUID="c7b87096-547e-442b-9701-4e6a222ce547" containerName="extract-content" containerID="cri-o://3ec609eb2dc9c093acded42d4fe4b76a13cc4f245a98175cb04fb09a960113d2" gracePeriod=30 Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:35.037385 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktlrw" event={"ID":"43675eea-b514-472a-9f19-d93ec4ddf044","Type":"ContainerStarted","Data":"688c39fa9bc4ad063dd8542765955eac1790ed1c22680b7ff340285fe8a7ff7a"} Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:35.037537 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ktlrw" podUID="43675eea-b514-472a-9f19-d93ec4ddf044" containerName="registry-server" containerID="cri-o://688c39fa9bc4ad063dd8542765955eac1790ed1c22680b7ff340285fe8a7ff7a" gracePeriod=30 Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:35.056320 4825 generic.go:334] 
"Generic (PLEG): container finished" podID="4d7e321a-a057-40e4-9826-4d9b8b46b30a" containerID="78995e6260e6066a0e3a09656206ae1c0e4a7cffdcdf6ee0c7f8b4b74361b63f" exitCode=0 Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:35.056416 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" event={"ID":"4d7e321a-a057-40e4-9826-4d9b8b46b30a","Type":"ContainerDied","Data":"78995e6260e6066a0e3a09656206ae1c0e4a7cffdcdf6ee0c7f8b4b74361b63f"} Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:35.058767 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q545b" event={"ID":"b5076053-c905-48d9-b10a-c720e82c9cee","Type":"ContainerStarted","Data":"24a9d85b0ced2d74bedfb415ef0f49e2e1d9bd66642a1f955b8c08245da99af8"} Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:35.059049 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q545b" podUID="b5076053-c905-48d9-b10a-c720e82c9cee" containerName="extract-content" containerID="cri-o://24a9d85b0ced2d74bedfb415ef0f49e2e1d9bd66642a1f955b8c08245da99af8" gracePeriod=30 Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:35.086946 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3c53d3cf-ed7c-4579-a577-9e19ffb5d58e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k98tl\" (UID: \"3c53d3cf-ed7c-4579-a577-9e19ffb5d58e\") " pod="openshift-marketplace/marketplace-operator-79b997595-k98tl" Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:35.175745 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ktlrw" podStartSLOduration=3.61577099 podStartE2EDuration="1m34.175731545s" podCreationTimestamp="2026-01-22 15:26:01 +0000 UTC" firstStartedPulling="2026-01-22 15:26:03.620797966 +0000 
UTC m=+110.382324876" lastFinishedPulling="2026-01-22 15:27:34.180758521 +0000 UTC m=+200.942285431" observedRunningTime="2026-01-22 15:27:35.174744902 +0000 UTC m=+201.936271812" watchObservedRunningTime="2026-01-22 15:27:35.175731545 +0000 UTC m=+201.937258455" Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:35.265437 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k98tl" Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:35.566088 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:27:35 crc kubenswrapper[4825]: I0122 15:27:35.566137 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.079358 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qql6g_fb2f5594-0942-47ab-be72-76cc52b73a6d/extract-content/0.log" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.079661 4825 generic.go:334] "Generic (PLEG): container finished" podID="fb2f5594-0942-47ab-be72-76cc52b73a6d" containerID="518b5686c119c19727845e6c7bf2de905e9dec9b537a7a02c478547da7449533" exitCode=2 Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.079723 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qql6g" 
event={"ID":"fb2f5594-0942-47ab-be72-76cc52b73a6d","Type":"ContainerDied","Data":"518b5686c119c19727845e6c7bf2de905e9dec9b537a7a02c478547da7449533"} Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.081037 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wnr8n_c7b87096-547e-442b-9701-4e6a222ce547/extract-content/0.log" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.081787 4825 generic.go:334] "Generic (PLEG): container finished" podID="c7b87096-547e-442b-9701-4e6a222ce547" containerID="3ec609eb2dc9c093acded42d4fe4b76a13cc4f245a98175cb04fb09a960113d2" exitCode=2 Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.081856 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnr8n" event={"ID":"c7b87096-547e-442b-9701-4e6a222ce547","Type":"ContainerDied","Data":"3ec609eb2dc9c093acded42d4fe4b76a13cc4f245a98175cb04fb09a960113d2"} Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.083443 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" event={"ID":"4d7e321a-a057-40e4-9826-4d9b8b46b30a","Type":"ContainerDied","Data":"2b179fb5335e5aea1c92abc9d89de840f96d33c35976540339ba211d0e83b4e8"} Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.083474 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b179fb5335e5aea1c92abc9d89de840f96d33c35976540339ba211d0e83b4e8" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.086017 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ktlrw_43675eea-b514-472a-9f19-d93ec4ddf044/registry-server/0.log" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.086952 4825 generic.go:334] "Generic (PLEG): container finished" podID="43675eea-b514-472a-9f19-d93ec4ddf044" containerID="688c39fa9bc4ad063dd8542765955eac1790ed1c22680b7ff340285fe8a7ff7a" 
exitCode=1 Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.087046 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktlrw" event={"ID":"43675eea-b514-472a-9f19-d93ec4ddf044","Type":"ContainerDied","Data":"688c39fa9bc4ad063dd8542765955eac1790ed1c22680b7ff340285fe8a7ff7a"} Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.088282 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pvfdl_44760e51-861e-4283-9593-8832b2d55847/extract-content/0.log" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.088587 4825 generic.go:334] "Generic (PLEG): container finished" podID="44760e51-861e-4283-9593-8832b2d55847" containerID="383fd915197ab9120194789429ad6a5951c87c32ed86056d37f73983def17fde" exitCode=2 Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.088641 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvfdl" event={"ID":"44760e51-861e-4283-9593-8832b2d55847","Type":"ContainerDied","Data":"383fd915197ab9120194789429ad6a5951c87c32ed86056d37f73983def17fde"} Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.089646 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q545b_b5076053-c905-48d9-b10a-c720e82c9cee/extract-content/0.log" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.089966 4825 generic.go:334] "Generic (PLEG): container finished" podID="b5076053-c905-48d9-b10a-c720e82c9cee" containerID="24a9d85b0ced2d74bedfb415ef0f49e2e1d9bd66642a1f955b8c08245da99af8" exitCode=2 Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.090055 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q545b" event={"ID":"b5076053-c905-48d9-b10a-c720e82c9cee","Type":"ContainerDied","Data":"24a9d85b0ced2d74bedfb415ef0f49e2e1d9bd66642a1f955b8c08245da99af8"} Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 
15:27:36.091643 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-r887w_9d0229a3-015e-43bf-bcb3-32088de6e95c/extract-content/0.log" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.092118 4825 generic.go:334] "Generic (PLEG): container finished" podID="9d0229a3-015e-43bf-bcb3-32088de6e95c" containerID="483d6141f6ffd8462d8b1566459c3b5d8092cc381d32aeec49dfbb35f988fca5" exitCode=2 Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.092218 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r887w" event={"ID":"9d0229a3-015e-43bf-bcb3-32088de6e95c","Type":"ContainerDied","Data":"483d6141f6ffd8462d8b1566459c3b5d8092cc381d32aeec49dfbb35f988fca5"} Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.095035 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw69k" event={"ID":"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104","Type":"ContainerDied","Data":"e2defe39a630aa086b154d6bab4c422f8d0db5475bc4fad2f3d0a9b2b5912edd"} Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.095066 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2defe39a630aa086b154d6bab4c422f8d0db5475bc4fad2f3d0a9b2b5912edd" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.133413 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sw69k" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.143132 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.144938 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pvfdl_44760e51-861e-4283-9593-8832b2d55847/extract-content/0.log" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.145496 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvfdl" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.225260 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104-catalog-content\") pod \"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104\" (UID: \"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.225319 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5pd4\" (UniqueName: \"kubernetes.io/projected/4d7e321a-a057-40e4-9826-4d9b8b46b30a-kube-api-access-v5pd4\") pod \"4d7e321a-a057-40e4-9826-4d9b8b46b30a\" (UID: \"4d7e321a-a057-40e4-9826-4d9b8b46b30a\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.225343 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z826h\" (UniqueName: \"kubernetes.io/projected/44760e51-861e-4283-9593-8832b2d55847-kube-api-access-z826h\") pod \"44760e51-861e-4283-9593-8832b2d55847\" (UID: \"44760e51-861e-4283-9593-8832b2d55847\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.225384 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcqd4\" (UniqueName: \"kubernetes.io/projected/aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104-kube-api-access-wcqd4\") pod \"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104\" (UID: \"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104\") " Jan 22 15:27:36 
crc kubenswrapper[4825]: I0122 15:27:36.225401 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104-utilities\") pod \"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104\" (UID: \"aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.225432 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44760e51-861e-4283-9593-8832b2d55847-utilities\") pod \"44760e51-861e-4283-9593-8832b2d55847\" (UID: \"44760e51-861e-4283-9593-8832b2d55847\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.225453 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4d7e321a-a057-40e4-9826-4d9b8b46b30a-marketplace-operator-metrics\") pod \"4d7e321a-a057-40e4-9826-4d9b8b46b30a\" (UID: \"4d7e321a-a057-40e4-9826-4d9b8b46b30a\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.225482 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44760e51-861e-4283-9593-8832b2d55847-catalog-content\") pod \"44760e51-861e-4283-9593-8832b2d55847\" (UID: \"44760e51-861e-4283-9593-8832b2d55847\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.225500 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d7e321a-a057-40e4-9826-4d9b8b46b30a-marketplace-trusted-ca\") pod \"4d7e321a-a057-40e4-9826-4d9b8b46b30a\" (UID: \"4d7e321a-a057-40e4-9826-4d9b8b46b30a\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.226513 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/4d7e321a-a057-40e4-9826-4d9b8b46b30a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "4d7e321a-a057-40e4-9826-4d9b8b46b30a" (UID: "4d7e321a-a057-40e4-9826-4d9b8b46b30a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.227165 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104-utilities" (OuterVolumeSpecName: "utilities") pod "aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104" (UID: "aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.227912 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44760e51-861e-4283-9593-8832b2d55847-utilities" (OuterVolumeSpecName: "utilities") pod "44760e51-861e-4283-9593-8832b2d55847" (UID: "44760e51-861e-4283-9593-8832b2d55847"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.241174 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44760e51-861e-4283-9593-8832b2d55847-kube-api-access-z826h" (OuterVolumeSpecName: "kube-api-access-z826h") pod "44760e51-861e-4283-9593-8832b2d55847" (UID: "44760e51-861e-4283-9593-8832b2d55847"). InnerVolumeSpecName "kube-api-access-z826h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.244342 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104-kube-api-access-wcqd4" (OuterVolumeSpecName: "kube-api-access-wcqd4") pod "aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104" (UID: "aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104"). InnerVolumeSpecName "kube-api-access-wcqd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.245132 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d7e321a-a057-40e4-9826-4d9b8b46b30a-kube-api-access-v5pd4" (OuterVolumeSpecName: "kube-api-access-v5pd4") pod "4d7e321a-a057-40e4-9826-4d9b8b46b30a" (UID: "4d7e321a-a057-40e4-9826-4d9b8b46b30a"). InnerVolumeSpecName "kube-api-access-v5pd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.253532 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44760e51-861e-4283-9593-8832b2d55847-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44760e51-861e-4283-9593-8832b2d55847" (UID: "44760e51-861e-4283-9593-8832b2d55847"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.258467 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d7e321a-a057-40e4-9826-4d9b8b46b30a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "4d7e321a-a057-40e4-9826-4d9b8b46b30a" (UID: "4d7e321a-a057-40e4-9826-4d9b8b46b30a"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.296469 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104" (UID: "aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.308961 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-r887w_9d0229a3-015e-43bf-bcb3-32088de6e95c/extract-content/0.log" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.309628 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r887w" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.314190 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qql6g_fb2f5594-0942-47ab-be72-76cc52b73a6d/extract-content/0.log" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.314607 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qql6g" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.319088 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ktlrw_43675eea-b514-472a-9f19-d93ec4ddf044/registry-server/0.log" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.319759 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ktlrw" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.327251 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.327272 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5pd4\" (UniqueName: \"kubernetes.io/projected/4d7e321a-a057-40e4-9826-4d9b8b46b30a-kube-api-access-v5pd4\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.327282 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z826h\" (UniqueName: \"kubernetes.io/projected/44760e51-861e-4283-9593-8832b2d55847-kube-api-access-z826h\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.327291 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.327299 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcqd4\" (UniqueName: \"kubernetes.io/projected/aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104-kube-api-access-wcqd4\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.327308 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44760e51-861e-4283-9593-8832b2d55847-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.327316 4825 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4d7e321a-a057-40e4-9826-4d9b8b46b30a-marketplace-operator-metrics\") on node \"crc\" DevicePath 
\"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.327326 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44760e51-861e-4283-9593-8832b2d55847-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.327335 4825 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d7e321a-a057-40e4-9826-4d9b8b46b30a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.328886 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wnr8n_c7b87096-547e-442b-9701-4e6a222ce547/extract-content/0.log" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.329245 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnr8n" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.332838 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q545b_b5076053-c905-48d9-b10a-c720e82c9cee/extract-content/0.log" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.333216 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q545b" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.399364 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k98tl"] Jan 22 15:27:36 crc kubenswrapper[4825]: W0122 15:27:36.420341 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c53d3cf_ed7c_4579_a577_9e19ffb5d58e.slice/crio-15e526262f74e3bd962492a1bd4cf6b37f74d98de4d65d368a74a0a706163e0e WatchSource:0}: Error finding container 15e526262f74e3bd962492a1bd4cf6b37f74d98de4d65d368a74a0a706163e0e: Status 404 returned error can't find the container with id 15e526262f74e3bd962492a1bd4cf6b37f74d98de4d65d368a74a0a706163e0e Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.427675 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb2f5594-0942-47ab-be72-76cc52b73a6d-catalog-content\") pod \"fb2f5594-0942-47ab-be72-76cc52b73a6d\" (UID: \"fb2f5594-0942-47ab-be72-76cc52b73a6d\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.427762 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb9l7\" (UniqueName: \"kubernetes.io/projected/43675eea-b514-472a-9f19-d93ec4ddf044-kube-api-access-xb9l7\") pod \"43675eea-b514-472a-9f19-d93ec4ddf044\" (UID: \"43675eea-b514-472a-9f19-d93ec4ddf044\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.427793 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59d6q\" (UniqueName: \"kubernetes.io/projected/fb2f5594-0942-47ab-be72-76cc52b73a6d-kube-api-access-59d6q\") pod \"fb2f5594-0942-47ab-be72-76cc52b73a6d\" (UID: \"fb2f5594-0942-47ab-be72-76cc52b73a6d\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.427819 4825 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0229a3-015e-43bf-bcb3-32088de6e95c-utilities\") pod \"9d0229a3-015e-43bf-bcb3-32088de6e95c\" (UID: \"9d0229a3-015e-43bf-bcb3-32088de6e95c\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.427842 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llxd2\" (UniqueName: \"kubernetes.io/projected/9d0229a3-015e-43bf-bcb3-32088de6e95c-kube-api-access-llxd2\") pod \"9d0229a3-015e-43bf-bcb3-32088de6e95c\" (UID: \"9d0229a3-015e-43bf-bcb3-32088de6e95c\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.427922 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0229a3-015e-43bf-bcb3-32088de6e95c-catalog-content\") pod \"9d0229a3-015e-43bf-bcb3-32088de6e95c\" (UID: \"9d0229a3-015e-43bf-bcb3-32088de6e95c\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.427957 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43675eea-b514-472a-9f19-d93ec4ddf044-utilities\") pod \"43675eea-b514-472a-9f19-d93ec4ddf044\" (UID: \"43675eea-b514-472a-9f19-d93ec4ddf044\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.427997 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb2f5594-0942-47ab-be72-76cc52b73a6d-utilities\") pod \"fb2f5594-0942-47ab-be72-76cc52b73a6d\" (UID: \"fb2f5594-0942-47ab-be72-76cc52b73a6d\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.428699 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d0229a3-015e-43bf-bcb3-32088de6e95c-utilities" (OuterVolumeSpecName: "utilities") pod "9d0229a3-015e-43bf-bcb3-32088de6e95c" (UID: "9d0229a3-015e-43bf-bcb3-32088de6e95c"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.429496 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb2f5594-0942-47ab-be72-76cc52b73a6d-utilities" (OuterVolumeSpecName: "utilities") pod "fb2f5594-0942-47ab-be72-76cc52b73a6d" (UID: "fb2f5594-0942-47ab-be72-76cc52b73a6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.428045 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43675eea-b514-472a-9f19-d93ec4ddf044-catalog-content\") pod \"43675eea-b514-472a-9f19-d93ec4ddf044\" (UID: \"43675eea-b514-472a-9f19-d93ec4ddf044\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.430921 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43675eea-b514-472a-9f19-d93ec4ddf044-kube-api-access-xb9l7" (OuterVolumeSpecName: "kube-api-access-xb9l7") pod "43675eea-b514-472a-9f19-d93ec4ddf044" (UID: "43675eea-b514-472a-9f19-d93ec4ddf044"). InnerVolumeSpecName "kube-api-access-xb9l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.431759 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43675eea-b514-472a-9f19-d93ec4ddf044-utilities" (OuterVolumeSpecName: "utilities") pod "43675eea-b514-472a-9f19-d93ec4ddf044" (UID: "43675eea-b514-472a-9f19-d93ec4ddf044"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.432723 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb2f5594-0942-47ab-be72-76cc52b73a6d-kube-api-access-59d6q" (OuterVolumeSpecName: "kube-api-access-59d6q") pod "fb2f5594-0942-47ab-be72-76cc52b73a6d" (UID: "fb2f5594-0942-47ab-be72-76cc52b73a6d"). InnerVolumeSpecName "kube-api-access-59d6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.433907 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d0229a3-015e-43bf-bcb3-32088de6e95c-kube-api-access-llxd2" (OuterVolumeSpecName: "kube-api-access-llxd2") pod "9d0229a3-015e-43bf-bcb3-32088de6e95c" (UID: "9d0229a3-015e-43bf-bcb3-32088de6e95c"). InnerVolumeSpecName "kube-api-access-llxd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.442547 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43675eea-b514-472a-9f19-d93ec4ddf044-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.442589 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb2f5594-0942-47ab-be72-76cc52b73a6d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.442602 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb9l7\" (UniqueName: \"kubernetes.io/projected/43675eea-b514-472a-9f19-d93ec4ddf044-kube-api-access-xb9l7\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.442614 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59d6q\" (UniqueName: 
\"kubernetes.io/projected/fb2f5594-0942-47ab-be72-76cc52b73a6d-kube-api-access-59d6q\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.442624 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0229a3-015e-43bf-bcb3-32088de6e95c-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.442635 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llxd2\" (UniqueName: \"kubernetes.io/projected/9d0229a3-015e-43bf-bcb3-32088de6e95c-kube-api-access-llxd2\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.448411 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb2f5594-0942-47ab-be72-76cc52b73a6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb2f5594-0942-47ab-be72-76cc52b73a6d" (UID: "fb2f5594-0942-47ab-be72-76cc52b73a6d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.452394 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d0229a3-015e-43bf-bcb3-32088de6e95c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d0229a3-015e-43bf-bcb3-32088de6e95c" (UID: "9d0229a3-015e-43bf-bcb3-32088de6e95c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.483839 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43675eea-b514-472a-9f19-d93ec4ddf044-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43675eea-b514-472a-9f19-d93ec4ddf044" (UID: "43675eea-b514-472a-9f19-d93ec4ddf044"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.543189 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pd6q\" (UniqueName: \"kubernetes.io/projected/b5076053-c905-48d9-b10a-c720e82c9cee-kube-api-access-5pd6q\") pod \"b5076053-c905-48d9-b10a-c720e82c9cee\" (UID: \"b5076053-c905-48d9-b10a-c720e82c9cee\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.543270 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5076053-c905-48d9-b10a-c720e82c9cee-utilities\") pod \"b5076053-c905-48d9-b10a-c720e82c9cee\" (UID: \"b5076053-c905-48d9-b10a-c720e82c9cee\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.543327 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5076053-c905-48d9-b10a-c720e82c9cee-catalog-content\") pod \"b5076053-c905-48d9-b10a-c720e82c9cee\" (UID: \"b5076053-c905-48d9-b10a-c720e82c9cee\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.543369 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b87096-547e-442b-9701-4e6a222ce547-utilities\") pod \"c7b87096-547e-442b-9701-4e6a222ce547\" (UID: \"c7b87096-547e-442b-9701-4e6a222ce547\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.543420 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szwxg\" (UniqueName: \"kubernetes.io/projected/c7b87096-547e-442b-9701-4e6a222ce547-kube-api-access-szwxg\") pod \"c7b87096-547e-442b-9701-4e6a222ce547\" (UID: \"c7b87096-547e-442b-9701-4e6a222ce547\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.543453 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b87096-547e-442b-9701-4e6a222ce547-catalog-content\") pod \"c7b87096-547e-442b-9701-4e6a222ce547\" (UID: \"c7b87096-547e-442b-9701-4e6a222ce547\") " Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.544237 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0229a3-015e-43bf-bcb3-32088de6e95c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.544265 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43675eea-b514-472a-9f19-d93ec4ddf044-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.544279 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb2f5594-0942-47ab-be72-76cc52b73a6d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.544432 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b87096-547e-442b-9701-4e6a222ce547-utilities" (OuterVolumeSpecName: "utilities") pod "c7b87096-547e-442b-9701-4e6a222ce547" (UID: "c7b87096-547e-442b-9701-4e6a222ce547"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.544464 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5076053-c905-48d9-b10a-c720e82c9cee-utilities" (OuterVolumeSpecName: "utilities") pod "b5076053-c905-48d9-b10a-c720e82c9cee" (UID: "b5076053-c905-48d9-b10a-c720e82c9cee"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.546699 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5076053-c905-48d9-b10a-c720e82c9cee-kube-api-access-5pd6q" (OuterVolumeSpecName: "kube-api-access-5pd6q") pod "b5076053-c905-48d9-b10a-c720e82c9cee" (UID: "b5076053-c905-48d9-b10a-c720e82c9cee"). InnerVolumeSpecName "kube-api-access-5pd6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.546997 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b87096-547e-442b-9701-4e6a222ce547-kube-api-access-szwxg" (OuterVolumeSpecName: "kube-api-access-szwxg") pod "c7b87096-547e-442b-9701-4e6a222ce547" (UID: "c7b87096-547e-442b-9701-4e6a222ce547"). InnerVolumeSpecName "kube-api-access-szwxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.559827 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b87096-547e-442b-9701-4e6a222ce547-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7b87096-547e-442b-9701-4e6a222ce547" (UID: "c7b87096-547e-442b-9701-4e6a222ce547"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.563190 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5076053-c905-48d9-b10a-c720e82c9cee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5076053-c905-48d9-b10a-c720e82c9cee" (UID: "b5076053-c905-48d9-b10a-c720e82c9cee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.645522 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szwxg\" (UniqueName: \"kubernetes.io/projected/c7b87096-547e-442b-9701-4e6a222ce547-kube-api-access-szwxg\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.645566 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b87096-547e-442b-9701-4e6a222ce547-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.645580 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pd6q\" (UniqueName: \"kubernetes.io/projected/b5076053-c905-48d9-b10a-c720e82c9cee-kube-api-access-5pd6q\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.645593 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5076053-c905-48d9-b10a-c720e82c9cee-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.645606 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5076053-c905-48d9-b10a-c720e82c9cee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:36 crc kubenswrapper[4825]: I0122 15:27:36.645621 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b87096-547e-442b-9701-4e6a222ce547-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.107542 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-r887w_9d0229a3-015e-43bf-bcb3-32088de6e95c/extract-content/0.log" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.108067 4825 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r887w" event={"ID":"9d0229a3-015e-43bf-bcb3-32088de6e95c","Type":"ContainerDied","Data":"3ac038a4d32c6617a7d439c1f35438f2805135201f9eaef31f38977d6dabc2e9"} Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.108102 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r887w" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.108107 4825 scope.go:117] "RemoveContainer" containerID="483d6141f6ffd8462d8b1566459c3b5d8092cc381d32aeec49dfbb35f988fca5" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.110221 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qql6g_fb2f5594-0942-47ab-be72-76cc52b73a6d/extract-content/0.log" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.111821 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qql6g" event={"ID":"fb2f5594-0942-47ab-be72-76cc52b73a6d","Type":"ContainerDied","Data":"8603fd8382e55dfdfd279509be994542cdf26a6037ec1963c88e6cb4f17bf971"} Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.111921 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qql6g" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.115061 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wnr8n_c7b87096-547e-442b-9701-4e6a222ce547/extract-content/0.log" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.116654 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnr8n" event={"ID":"c7b87096-547e-442b-9701-4e6a222ce547","Type":"ContainerDied","Data":"c7fd54f869b6c06955a89b9d5cf3a80b4e3a8c84e8ebac44bfffb9a66325e943"} Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.116772 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnr8n" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.118470 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ktlrw_43675eea-b514-472a-9f19-d93ec4ddf044/registry-server/0.log" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.119372 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktlrw" event={"ID":"43675eea-b514-472a-9f19-d93ec4ddf044","Type":"ContainerDied","Data":"4038b9197b5ba86e9d2f5c3818f4406ba7ea92a471b51adab61ea8b006986ee0"} Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.119452 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ktlrw" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.126193 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q545b_b5076053-c905-48d9-b10a-c720e82c9cee/extract-content/0.log" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.126598 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q545b" event={"ID":"b5076053-c905-48d9-b10a-c720e82c9cee","Type":"ContainerDied","Data":"c0e8d1114e9578a59ac9d40667bc8074bf04a1ffe1b0df35dd41206ae0ae3b97"} Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.126700 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q545b" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.131744 4825 scope.go:117] "RemoveContainer" containerID="3bbb894a0610dbc4a3f082274a17e142ddc9ee58096e09ed835f64cb6d070446" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.134718 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k98tl" event={"ID":"3c53d3cf-ed7c-4579-a577-9e19ffb5d58e","Type":"ContainerStarted","Data":"13ae2774065d9c2a8dde80ec5ad94056cd3f4e2c671b3c8ff1316d13fa010048"} Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.134802 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k98tl" event={"ID":"3c53d3cf-ed7c-4579-a577-9e19ffb5d58e","Type":"ContainerStarted","Data":"15e526262f74e3bd962492a1bd4cf6b37f74d98de4d65d368a74a0a706163e0e"} Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.135031 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-k98tl" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.140443 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-pvfdl_44760e51-861e-4283-9593-8832b2d55847/extract-content/0.log" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.141064 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kv4vj" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.141341 4825 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k98tl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.56:8080/healthz\": dial tcp 10.217.0.56:8080: connect: connection refused" start-of-body= Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.141394 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k98tl" podUID="3c53d3cf-ed7c-4579-a577-9e19ffb5d58e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.56:8080/healthz\": dial tcp 10.217.0.56:8080: connect: connection refused" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.141791 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvfdl" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.141680 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvfdl" event={"ID":"44760e51-861e-4283-9593-8832b2d55847","Type":"ContainerDied","Data":"5dac78a7a67c1029586e043370afd44aa79af2fbc54dfe42d8daf6822ed853d2"} Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.142345 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sw69k" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.158551 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-k98tl" podStartSLOduration=3.158528919 podStartE2EDuration="3.158528919s" podCreationTimestamp="2026-01-22 15:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:27:37.152385766 +0000 UTC m=+203.913912696" watchObservedRunningTime="2026-01-22 15:27:37.158528919 +0000 UTC m=+203.920055829" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.182561 4825 scope.go:117] "RemoveContainer" containerID="518b5686c119c19727845e6c7bf2de905e9dec9b537a7a02c478547da7449533" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.217465 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r887w"] Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.222024 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r887w"] Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.226525 4825 scope.go:117] "RemoveContainer" containerID="0581c484e8242e16672f8448dd2687c9c82573fb2f57cdf7073e2c3830fdcf6c" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.278936 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qql6g"] Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.289620 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qql6g"] Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.294878 4825 scope.go:117] "RemoveContainer" containerID="3ec609eb2dc9c093acded42d4fe4b76a13cc4f245a98175cb04fb09a960113d2" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.310533 4825 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-pvfdl"] Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.320823 4825 scope.go:117] "RemoveContainer" containerID="de4ea97cb84d102ec9ab9befb91e6c504fecc3a058cce914003091c286d8ef0f" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.323194 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvfdl"] Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.326440 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sw69k"] Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.331137 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sw69k"] Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.341683 4825 scope.go:117] "RemoveContainer" containerID="688c39fa9bc4ad063dd8542765955eac1790ed1c22680b7ff340285fe8a7ff7a" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.342582 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q545b"] Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.345889 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q545b"] Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.361870 4825 scope.go:117] "RemoveContainer" containerID="6ccd2f81d9656773367f0f8426f9d9389fafbd24842760da4029b97b3f06c4be" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.362006 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnr8n"] Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.369963 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wnr8n"] Jan 22 15:27:37 crc kubenswrapper[4825]: E0122 15:27:37.379048 4825 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb2f5594_0942_47ab_be72_76cc52b73a6d.slice/crio-8603fd8382e55dfdfd279509be994542cdf26a6037ec1963c88e6cb4f17bf971\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5076053_c905_48d9_b10a_c720e82c9cee.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43675eea_b514_472a_9f19_d93ec4ddf044.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa62c6e7_0cb9_4c7e_8885_a2bf0d0e1104.slice/crio-e2defe39a630aa086b154d6bab4c422f8d0db5475bc4fad2f3d0a9b2b5912edd\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb2f5594_0942_47ab_be72_76cc52b73a6d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d0229a3_015e_43bf_bcb3_32088de6e95c.slice/crio-3ac038a4d32c6617a7d439c1f35438f2805135201f9eaef31f38977d6dabc2e9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44760e51_861e_4283_9593_8832b2d55847.slice/crio-5dac78a7a67c1029586e043370afd44aa79af2fbc54dfe42d8daf6822ed853d2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43675eea_b514_472a_9f19_d93ec4ddf044.slice/crio-4038b9197b5ba86e9d2f5c3818f4406ba7ea92a471b51adab61ea8b006986ee0\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5076053_c905_48d9_b10a_c720e82c9cee.slice/crio-c0e8d1114e9578a59ac9d40667bc8074bf04a1ffe1b0df35dd41206ae0ae3b97\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d0229a3_015e_43bf_bcb3_32088de6e95c.slice\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7b87096_547e_442b_9701_4e6a222ce547.slice\": RecentStats: unable to find data in memory cache]" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.387113 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ktlrw"] Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.390783 4825 scope.go:117] "RemoveContainer" containerID="0448306a6563ea153e461d2f45c8adbf34bd44b42c47525cbc03f6149ffa68e0" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.392408 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ktlrw"] Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.396139 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kv4vj"] Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.406973 4825 scope.go:117] "RemoveContainer" containerID="24a9d85b0ced2d74bedfb415ef0f49e2e1d9bd66642a1f955b8c08245da99af8" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.407289 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kv4vj"] Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.430078 4825 scope.go:117] "RemoveContainer" containerID="fdc96b7fa9d4898de293842b61cbf959383cd70b5f5fdf6b9ef0fe788dae8fcb" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.440678 4825 scope.go:117] "RemoveContainer" containerID="383fd915197ab9120194789429ad6a5951c87c32ed86056d37f73983def17fde" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.451329 4825 scope.go:117] "RemoveContainer" containerID="d0ed022277d028a0d5b07756b32159f9c20f8d07f17e0c1da3af23062c3ea9b1" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.578656 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43675eea-b514-472a-9f19-d93ec4ddf044" 
path="/var/lib/kubelet/pods/43675eea-b514-472a-9f19-d93ec4ddf044/volumes" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.579556 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44760e51-861e-4283-9593-8832b2d55847" path="/var/lib/kubelet/pods/44760e51-861e-4283-9593-8832b2d55847/volumes" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.580313 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d7e321a-a057-40e4-9826-4d9b8b46b30a" path="/var/lib/kubelet/pods/4d7e321a-a057-40e4-9826-4d9b8b46b30a/volumes" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.581435 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d0229a3-015e-43bf-bcb3-32088de6e95c" path="/var/lib/kubelet/pods/9d0229a3-015e-43bf-bcb3-32088de6e95c/volumes" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.582038 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104" path="/var/lib/kubelet/pods/aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104/volumes" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.582718 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5076053-c905-48d9-b10a-c720e82c9cee" path="/var/lib/kubelet/pods/b5076053-c905-48d9-b10a-c720e82c9cee/volumes" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.583790 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b87096-547e-442b-9701-4e6a222ce547" path="/var/lib/kubelet/pods/c7b87096-547e-442b-9701-4e6a222ce547/volumes" Jan 22 15:27:37 crc kubenswrapper[4825]: I0122 15:27:37.584430 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb2f5594-0942-47ab-be72-76cc52b73a6d" path="/var/lib/kubelet/pods/fb2f5594-0942-47ab-be72-76cc52b73a6d/volumes" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.161971 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/marketplace-operator-79b997595-k98tl" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.641539 4825 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.641768 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5076053-c905-48d9-b10a-c720e82c9cee" containerName="extract-utilities" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.641779 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5076053-c905-48d9-b10a-c720e82c9cee" containerName="extract-utilities" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.641788 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43675eea-b514-472a-9f19-d93ec4ddf044" containerName="extract-content" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.641796 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="43675eea-b514-472a-9f19-d93ec4ddf044" containerName="extract-content" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.641808 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104" containerName="extract-content" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.641814 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104" containerName="extract-content" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.641821 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5076053-c905-48d9-b10a-c720e82c9cee" containerName="extract-content" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.641827 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5076053-c905-48d9-b10a-c720e82c9cee" containerName="extract-content" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.641837 4825 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="44760e51-861e-4283-9593-8832b2d55847" containerName="extract-utilities" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.641843 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="44760e51-861e-4283-9593-8832b2d55847" containerName="extract-utilities" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.641851 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104" containerName="extract-utilities" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.641858 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104" containerName="extract-utilities" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.641865 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d7e321a-a057-40e4-9826-4d9b8b46b30a" containerName="marketplace-operator" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.641871 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d7e321a-a057-40e4-9826-4d9b8b46b30a" containerName="marketplace-operator" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.641878 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb2f5594-0942-47ab-be72-76cc52b73a6d" containerName="extract-content" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.641884 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb2f5594-0942-47ab-be72-76cc52b73a6d" containerName="extract-content" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.641893 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43675eea-b514-472a-9f19-d93ec4ddf044" containerName="extract-utilities" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.641899 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="43675eea-b514-472a-9f19-d93ec4ddf044" containerName="extract-utilities" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.641905 4825 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="9d0229a3-015e-43bf-bcb3-32088de6e95c" containerName="extract-utilities" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.641911 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0229a3-015e-43bf-bcb3-32088de6e95c" containerName="extract-utilities" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.641918 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b87096-547e-442b-9701-4e6a222ce547" containerName="extract-utilities" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.641924 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b87096-547e-442b-9701-4e6a222ce547" containerName="extract-utilities" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.641932 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104" containerName="registry-server" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.641938 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104" containerName="registry-server" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.641945 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b87096-547e-442b-9701-4e6a222ce547" containerName="extract-content" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.641950 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b87096-547e-442b-9701-4e6a222ce547" containerName="extract-content" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.641960 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0229a3-015e-43bf-bcb3-32088de6e95c" containerName="extract-content" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.641965 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0229a3-015e-43bf-bcb3-32088de6e95c" containerName="extract-content" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.641971 4825 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="fb2f5594-0942-47ab-be72-76cc52b73a6d" containerName="extract-utilities" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.641993 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb2f5594-0942-47ab-be72-76cc52b73a6d" containerName="extract-utilities" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.642003 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44760e51-861e-4283-9593-8832b2d55847" containerName="extract-content" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.642009 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="44760e51-861e-4283-9593-8832b2d55847" containerName="extract-content" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.642016 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43675eea-b514-472a-9f19-d93ec4ddf044" containerName="registry-server" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.642021 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="43675eea-b514-472a-9f19-d93ec4ddf044" containerName="registry-server" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.642120 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d7e321a-a057-40e4-9826-4d9b8b46b30a" containerName="marketplace-operator" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.642130 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb2f5594-0942-47ab-be72-76cc52b73a6d" containerName="extract-content" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.642138 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="43675eea-b514-472a-9f19-d93ec4ddf044" containerName="registry-server" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.642147 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b87096-547e-442b-9701-4e6a222ce547" containerName="extract-content" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.642155 4825 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="aa62c6e7-0cb9-4c7e-8885-a2bf0d0e1104" containerName="registry-server" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.642167 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d0229a3-015e-43bf-bcb3-32088de6e95c" containerName="extract-content" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.642176 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="44760e51-861e-4283-9593-8832b2d55847" containerName="extract-content" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.642186 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5076053-c905-48d9-b10a-c720e82c9cee" containerName="extract-content" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.642590 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.679527 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.715410 4825 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.715813 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01" gracePeriod=15 Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.715938 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" 
containerID="cri-o://3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf" gracePeriod=15 Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.715933 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3" gracePeriod=15 Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.715969 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f" gracePeriod=15 Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.715780 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761" gracePeriod=15 Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.716800 4825 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.717018 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.717029 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.717041 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.717047 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.717055 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.717061 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.717069 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.717075 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.717086 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.717093 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.717100 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.717106 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.717114 4825 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.717119 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.717218 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.717230 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.717237 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.717246 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.717260 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.717466 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.783168 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:27:38Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:27:38Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:27:38Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:27:38Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:27cf3abbf8fd467e0024e29f4a1590ade73c4e616041027fc414be0d345fbddc\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:61565de83851ce1a60a7f5484dc89d16992896eb24005c0196eed44fc53d8e6a\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1671130350},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0934f30eb8f9333151bdb8fb7ad24fe19bb186a20d28b0541182f909fb8f0145\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:dac313fa046b5a0965a26ce6996a51a0a3a77668fdbe4a5e5beea707e8024a2f\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"
sizeBytes\\\":1202844902},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:2b72e40c5d5b36b681f40c16ebf3dcac6520ed0c79f174ba87f673ab7afd209a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:d83ee77ad07e06451a84205ac4c85c69e912a1c975e1a8a95095d79218028dce\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1178956511},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:f5cc57bade9e356b6af4211c07e49cde20c7cb921769b00c2cf9bf1a17bf76fc\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:f6f94e2a83937ff48dd2dc14f55325f6ee2d688985dc375d44cb7ae105f593d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1169599210},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},
{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb
68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.783687 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.783919 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.784199 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.784398 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.784420 4825 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.787077 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.787311 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.787398 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.787426 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.787447 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.889141 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.889191 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.889216 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.889235 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.889294 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.889312 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.889350 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.889365 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.889440 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.889482 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.889501 4825 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.889521 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.889558 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.959647 4825 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.959712 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.975258 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.990909 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.990989 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.991021 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.991123 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.991132 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: I0122 15:27:38.991155 4825 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:27:38 crc kubenswrapper[4825]: W0122 15:27:38.991280 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-feee0e8e28055cf7a842e1a9ecf6c9b7b99cd4dbb798f6a2f40bb4822274fe84 WatchSource:0}: Error finding container feee0e8e28055cf7a842e1a9ecf6c9b7b99cd4dbb798f6a2f40bb4822274fe84: Status 404 returned error can't find the container with id feee0e8e28055cf7a842e1a9ecf6c9b7b99cd4dbb798f6a2f40bb4822274fe84 Jan 22 15:27:38 crc kubenswrapper[4825]: E0122 15:27:38.994652 4825 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.97:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d1720f7f9233c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 15:27:38.993165116 +0000 UTC m=+205.754692026,LastTimestamp:2026-01-22 15:27:38.993165116 +0000 UTC m=+205.754692026,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 15:27:39 crc kubenswrapper[4825]: I0122 15:27:39.168456 4825 generic.go:334] "Generic (PLEG): container finished" podID="2c66dbaa-8b04-4f97-be82-717510c14a1c" containerID="27dd41f0de4000ea2769ce5f766e7e7f07b4bd2cbf69b107f31c41323ef7e7bd" exitCode=0 Jan 22 15:27:39 crc kubenswrapper[4825]: I0122 15:27:39.168530 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2c66dbaa-8b04-4f97-be82-717510c14a1c","Type":"ContainerDied","Data":"27dd41f0de4000ea2769ce5f766e7e7f07b4bd2cbf69b107f31c41323ef7e7bd"} Jan 22 15:27:39 crc kubenswrapper[4825]: I0122 15:27:39.169247 4825 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:39 crc kubenswrapper[4825]: I0122 15:27:39.169628 4825 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:39 crc kubenswrapper[4825]: I0122 15:27:39.169900 4825 status_manager.go:851] "Failed to get status for pod" podUID="2c66dbaa-8b04-4f97-be82-717510c14a1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:39 crc kubenswrapper[4825]: I0122 15:27:39.169942 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"feee0e8e28055cf7a842e1a9ecf6c9b7b99cd4dbb798f6a2f40bb4822274fe84"} Jan 22 15:27:39 crc kubenswrapper[4825]: I0122 15:27:39.172402 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 15:27:39 crc kubenswrapper[4825]: I0122 15:27:39.173687 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 15:27:39 crc kubenswrapper[4825]: I0122 15:27:39.174415 4825 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01" exitCode=0 Jan 22 15:27:39 crc kubenswrapper[4825]: I0122 15:27:39.174446 4825 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf" exitCode=0 Jan 22 15:27:39 crc kubenswrapper[4825]: I0122 15:27:39.174458 4825 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3" exitCode=0 Jan 22 15:27:39 crc kubenswrapper[4825]: I0122 15:27:39.174466 4825 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f" exitCode=2 Jan 22 15:27:39 crc kubenswrapper[4825]: I0122 15:27:39.174529 4825 scope.go:117] "RemoveContainer" containerID="4c9d8804bb0ac821560d26f26014182c8d3fab166efba798e90967314eace006" Jan 22 15:27:39 crc kubenswrapper[4825]: E0122 15:27:39.214316 4825 controller.go:195] "Failed to update lease" 
err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:39 crc kubenswrapper[4825]: E0122 15:27:39.214765 4825 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:39 crc kubenswrapper[4825]: E0122 15:27:39.215166 4825 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:39 crc kubenswrapper[4825]: E0122 15:27:39.215340 4825 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:39 crc kubenswrapper[4825]: E0122 15:27:39.215620 4825 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:39 crc kubenswrapper[4825]: I0122 15:27:39.215668 4825 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 22 15:27:39 crc kubenswrapper[4825]: E0122 15:27:39.215915 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="200ms" Jan 22 15:27:39 crc kubenswrapper[4825]: E0122 15:27:39.417393 4825 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="400ms" Jan 22 15:27:39 crc kubenswrapper[4825]: E0122 15:27:39.818777 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="800ms" Jan 22 15:27:39 crc kubenswrapper[4825]: I0122 15:27:39.913317 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 15:27:39 crc kubenswrapper[4825]: I0122 15:27:39.914165 4825 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:39 crc kubenswrapper[4825]: I0122 15:27:39.914890 4825 status_manager.go:851] "Failed to get status for pod" podUID="2c66dbaa-8b04-4f97-be82-717510c14a1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:39 crc kubenswrapper[4825]: I0122 15:27:39.915267 4825 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:40 crc 
kubenswrapper[4825]: I0122 15:27:40.181533 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"bb26d00512b7d70b4177b1be846278844aa8b3f0b5248da8a9f3fca0dc8fb539"} Jan 22 15:27:40 crc kubenswrapper[4825]: I0122 15:27:40.182195 4825 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:40 crc kubenswrapper[4825]: I0122 15:27:40.182446 4825 status_manager.go:851] "Failed to get status for pod" podUID="2c66dbaa-8b04-4f97-be82-717510c14a1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:40 crc kubenswrapper[4825]: I0122 15:27:40.182810 4825 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:40 crc kubenswrapper[4825]: I0122 15:27:40.184500 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 15:27:40 crc kubenswrapper[4825]: I0122 15:27:40.458027 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 15:27:40 crc kubenswrapper[4825]: I0122 15:27:40.458596 4825 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:40 crc kubenswrapper[4825]: I0122 15:27:40.458852 4825 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:40 crc kubenswrapper[4825]: I0122 15:27:40.459121 4825 status_manager.go:851] "Failed to get status for pod" podUID="2c66dbaa-8b04-4f97-be82-717510c14a1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:40 crc kubenswrapper[4825]: E0122 15:27:40.620373 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="1.6s" Jan 22 15:27:40 crc kubenswrapper[4825]: I0122 15:27:40.624709 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2c66dbaa-8b04-4f97-be82-717510c14a1c-var-lock\") pod \"2c66dbaa-8b04-4f97-be82-717510c14a1c\" (UID: \"2c66dbaa-8b04-4f97-be82-717510c14a1c\") " Jan 22 15:27:40 crc kubenswrapper[4825]: 
I0122 15:27:40.624770 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c66dbaa-8b04-4f97-be82-717510c14a1c-kubelet-dir\") pod \"2c66dbaa-8b04-4f97-be82-717510c14a1c\" (UID: \"2c66dbaa-8b04-4f97-be82-717510c14a1c\") " Jan 22 15:27:40 crc kubenswrapper[4825]: I0122 15:27:40.624842 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c66dbaa-8b04-4f97-be82-717510c14a1c-kube-api-access\") pod \"2c66dbaa-8b04-4f97-be82-717510c14a1c\" (UID: \"2c66dbaa-8b04-4f97-be82-717510c14a1c\") " Jan 22 15:27:40 crc kubenswrapper[4825]: I0122 15:27:40.624860 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c66dbaa-8b04-4f97-be82-717510c14a1c-var-lock" (OuterVolumeSpecName: "var-lock") pod "2c66dbaa-8b04-4f97-be82-717510c14a1c" (UID: "2c66dbaa-8b04-4f97-be82-717510c14a1c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:27:40 crc kubenswrapper[4825]: I0122 15:27:40.624884 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c66dbaa-8b04-4f97-be82-717510c14a1c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2c66dbaa-8b04-4f97-be82-717510c14a1c" (UID: "2c66dbaa-8b04-4f97-be82-717510c14a1c"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:27:40 crc kubenswrapper[4825]: I0122 15:27:40.625039 4825 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2c66dbaa-8b04-4f97-be82-717510c14a1c-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:40 crc kubenswrapper[4825]: I0122 15:27:40.625055 4825 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c66dbaa-8b04-4f97-be82-717510c14a1c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:40 crc kubenswrapper[4825]: I0122 15:27:40.629919 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c66dbaa-8b04-4f97-be82-717510c14a1c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2c66dbaa-8b04-4f97-be82-717510c14a1c" (UID: "2c66dbaa-8b04-4f97-be82-717510c14a1c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:27:40 crc kubenswrapper[4825]: I0122 15:27:40.726446 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c66dbaa-8b04-4f97-be82-717510c14a1c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:41 crc kubenswrapper[4825]: E0122 15:27:41.129833 4825 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.97:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d1720f7f9233c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 15:27:38.993165116 +0000 UTC m=+205.754692026,LastTimestamp:2026-01-22 15:27:38.993165116 +0000 UTC m=+205.754692026,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.168533 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.169374 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.169973 4825 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.170245 4825 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.170458 4825 status_manager.go:851] "Failed to get status for pod" podUID="2c66dbaa-8b04-4f97-be82-717510c14a1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.170832 4825 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.201158 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2c66dbaa-8b04-4f97-be82-717510c14a1c","Type":"ContainerDied","Data":"844e4742378f7b3744b7c18862e15c4118bd329bd10af18fec2a0127b02ab003"} Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.201211 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="844e4742378f7b3744b7c18862e15c4118bd329bd10af18fec2a0127b02ab003" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.201285 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.208227 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.209108 4825 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761" exitCode=0 Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.209168 4825 scope.go:117] "RemoveContainer" containerID="f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.209431 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.219615 4825 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.220003 4825 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.220371 4825 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.220911 4825 status_manager.go:851] "Failed to get status for pod" podUID="2c66dbaa-8b04-4f97-be82-717510c14a1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.225098 4825 scope.go:117] "RemoveContainer" containerID="3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.236015 4825 scope.go:117] "RemoveContainer" containerID="1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.251690 4825 scope.go:117] "RemoveContainer" containerID="bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.276905 4825 scope.go:117] "RemoveContainer" containerID="673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.290669 4825 scope.go:117] "RemoveContainer" containerID="cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.305578 4825 scope.go:117] "RemoveContainer" containerID="f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01" Jan 22 15:27:41 crc kubenswrapper[4825]: E0122 15:27:41.306049 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\": container with ID starting with f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01 not 
found: ID does not exist" containerID="f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.306084 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01"} err="failed to get container status \"f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\": rpc error: code = NotFound desc = could not find container \"f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01\": container with ID starting with f7ce30ef4887e46213e3c51677b72d491fa6f17781bf1a0118078ce67625bc01 not found: ID does not exist" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.306111 4825 scope.go:117] "RemoveContainer" containerID="3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf" Jan 22 15:27:41 crc kubenswrapper[4825]: E0122 15:27:41.306489 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\": container with ID starting with 3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf not found: ID does not exist" containerID="3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.306591 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf"} err="failed to get container status \"3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\": rpc error: code = NotFound desc = could not find container \"3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf\": container with ID starting with 3a14b85a01c57755196a084f2715c2406ab9536d67559284adb371f3cbca3abf not found: ID does not exist" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.306705 
4825 scope.go:117] "RemoveContainer" containerID="1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3" Jan 22 15:27:41 crc kubenswrapper[4825]: E0122 15:27:41.307137 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\": container with ID starting with 1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3 not found: ID does not exist" containerID="1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.307178 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3"} err="failed to get container status \"1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\": rpc error: code = NotFound desc = could not find container \"1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3\": container with ID starting with 1bcacc39344f02ebee762d635139271477b6c564aa4c273d98ccb7c8395335d3 not found: ID does not exist" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.307211 4825 scope.go:117] "RemoveContainer" containerID="bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f" Jan 22 15:27:41 crc kubenswrapper[4825]: E0122 15:27:41.307458 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\": container with ID starting with bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f not found: ID does not exist" containerID="bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.307549 4825 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f"} err="failed to get container status \"bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\": rpc error: code = NotFound desc = could not find container \"bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f\": container with ID starting with bae2d60b20915ee063f352e51fcb9f73ac8092e4bd2e3d362ea8f537b5ea020f not found: ID does not exist" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.307625 4825 scope.go:117] "RemoveContainer" containerID="673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761" Jan 22 15:27:41 crc kubenswrapper[4825]: E0122 15:27:41.308058 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\": container with ID starting with 673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761 not found: ID does not exist" containerID="673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.308102 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761"} err="failed to get container status \"673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\": rpc error: code = NotFound desc = could not find container \"673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761\": container with ID starting with 673fa737a3cd075993400699aba49e7ec8413bc0d4cf0d0809b3ab950f11a761 not found: ID does not exist" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.308135 4825 scope.go:117] "RemoveContainer" containerID="cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf" Jan 22 15:27:41 crc kubenswrapper[4825]: E0122 15:27:41.308424 4825 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\": container with ID starting with cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf not found: ID does not exist" containerID="cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.308454 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf"} err="failed to get container status \"cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\": rpc error: code = NotFound desc = could not find container \"cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf\": container with ID starting with cf84e75bc4bfcbd7f6cb3f904052a73542ae47623a118fc94d6f920472d160bf not found: ID does not exist" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.333255 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.333367 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.333425 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 
15:27:41.333509 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.333578 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.333756 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.333801 4825 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.334002 4825 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.434633 4825 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.524175 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.525006 4825 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.525368 4825 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.525564 4825 status_manager.go:851] "Failed to get status for pod" 
podUID="2c66dbaa-8b04-4f97-be82-717510c14a1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:41 crc kubenswrapper[4825]: I0122 15:27:41.525778 4825 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:42 crc kubenswrapper[4825]: E0122 15:27:42.221926 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="3.2s" Jan 22 15:27:42 crc kubenswrapper[4825]: E0122 15:27:42.546923 4825 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.97:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" volumeName="registry-storage" Jan 22 15:27:43 crc kubenswrapper[4825]: I0122 15:27:43.522827 4825 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:43 crc kubenswrapper[4825]: 
I0122 15:27:43.523586 4825 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:43 crc kubenswrapper[4825]: I0122 15:27:43.524031 4825 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:43 crc kubenswrapper[4825]: I0122 15:27:43.524502 4825 status_manager.go:851] "Failed to get status for pod" podUID="2c66dbaa-8b04-4f97-be82-717510c14a1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:45 crc kubenswrapper[4825]: E0122 15:27:45.423582 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="6.4s" Jan 22 15:27:48 crc kubenswrapper[4825]: E0122 15:27:48.854423 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:27:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:27:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:27:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T15:27:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:27cf3abbf8fd467e0024e29f4a1590ade73c4e616041027fc414be0d345fbddc\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:61565de83851ce1a60a7f5484dc89d16992896eb24005c0196eed44fc53d8e6a\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1671130350},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0934f30eb8f9333151bdb8fb7ad24fe19bb186a20d28b0541182f909fb8f0145\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:dac313fa046b5a0965a26ce6996a51a0a3a77668fdbe4a5e5beea707e8024a2f\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"
sizeBytes\\\":1202844902},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:2b72e40c5d5b36b681f40c16ebf3dcac6520ed0c79f174ba87f673ab7afd209a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:d83ee77ad07e06451a84205ac4c85c69e912a1c975e1a8a95095d79218028dce\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1178956511},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:f5cc57bade9e356b6af4211c07e49cde20c7cb921769b00c2cf9bf1a17bf76fc\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:f6f94e2a83937ff48dd2dc14f55325f6ee2d688985dc375d44cb7ae105f593d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1169599210},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},
{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb
68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:48 crc kubenswrapper[4825]: E0122 15:27:48.854953 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:48 crc kubenswrapper[4825]: E0122 15:27:48.855155 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:48 crc kubenswrapper[4825]: E0122 15:27:48.855498 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:48 crc kubenswrapper[4825]: E0122 15:27:48.855681 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:48 crc kubenswrapper[4825]: E0122 15:27:48.855707 4825 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 15:27:51 crc kubenswrapper[4825]: E0122 15:27:51.131292 4825 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.97:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d1720f7f9233c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 15:27:38.993165116 +0000 UTC m=+205.754692026,LastTimestamp:2026-01-22 15:27:38.993165116 +0000 UTC m=+205.754692026,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 15:27:51 crc kubenswrapper[4825]: E0122 15:27:51.824968 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="7s" Jan 22 15:27:52 crc kubenswrapper[4825]: I0122 15:27:52.277469 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 15:27:52 crc kubenswrapper[4825]: I0122 15:27:52.277561 4825 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68" exitCode=1 Jan 22 15:27:52 crc kubenswrapper[4825]: I0122 15:27:52.277643 4825 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68"} Jan 22 15:27:52 crc kubenswrapper[4825]: I0122 15:27:52.278465 4825 scope.go:117] "RemoveContainer" containerID="0f891b55b3ad8323004f5a0210f6252ea07263a66e8d2fa98077cf2c95bb0a68" Jan 22 15:27:52 crc kubenswrapper[4825]: I0122 15:27:52.278787 4825 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:52 crc kubenswrapper[4825]: I0122 15:27:52.279407 4825 status_manager.go:851] "Failed to get status for pod" podUID="2c66dbaa-8b04-4f97-be82-717510c14a1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:52 crc kubenswrapper[4825]: I0122 15:27:52.280445 4825 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:52 crc kubenswrapper[4825]: I0122 15:27:52.280824 4825 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial 
tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:52 crc kubenswrapper[4825]: I0122 15:27:52.516294 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:27:52 crc kubenswrapper[4825]: I0122 15:27:52.517266 4825 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:52 crc kubenswrapper[4825]: I0122 15:27:52.517449 4825 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:52 crc kubenswrapper[4825]: I0122 15:27:52.517610 4825 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:52 crc kubenswrapper[4825]: I0122 15:27:52.517763 4825 status_manager.go:851] "Failed to get status for pod" podUID="2c66dbaa-8b04-4f97-be82-717510c14a1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:52 crc kubenswrapper[4825]: I0122 15:27:52.528056 4825 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4e0b252c-291b-4c92-9f1a-f10e9026fcb4" Jan 22 15:27:52 crc kubenswrapper[4825]: I0122 15:27:52.528094 4825 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4e0b252c-291b-4c92-9f1a-f10e9026fcb4" Jan 22 15:27:52 crc kubenswrapper[4825]: E0122 15:27:52.528627 4825 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:27:52 crc kubenswrapper[4825]: I0122 15:27:52.529365 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:27:52 crc kubenswrapper[4825]: W0122 15:27:52.549087 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-d17ff6f6ecf7a872f078e819f2b9634e698204e5199f9690524bf64d8a9c242a WatchSource:0}: Error finding container d17ff6f6ecf7a872f078e819f2b9634e698204e5199f9690524bf64d8a9c242a: Status 404 returned error can't find the container with id d17ff6f6ecf7a872f078e819f2b9634e698204e5199f9690524bf64d8a9c242a Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 15:27:53.291722 4825 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="0e6632041a5700e68dd90b4ad53ab4e7e9a0fbd38788c6fc63efb5a3f9efa42d" exitCode=0 Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 15:27:53.291796 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"0e6632041a5700e68dd90b4ad53ab4e7e9a0fbd38788c6fc63efb5a3f9efa42d"} Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 
15:27:53.291842 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d17ff6f6ecf7a872f078e819f2b9634e698204e5199f9690524bf64d8a9c242a"} Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 15:27:53.292100 4825 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4e0b252c-291b-4c92-9f1a-f10e9026fcb4" Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 15:27:53.292113 4825 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4e0b252c-291b-4c92-9f1a-f10e9026fcb4" Jan 22 15:27:53 crc kubenswrapper[4825]: E0122 15:27:53.292461 4825 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 15:27:53.292699 4825 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 15:27:53.292913 4825 status_manager.go:851] "Failed to get status for pod" podUID="2c66dbaa-8b04-4f97-be82-717510c14a1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 15:27:53.293917 4825 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 15:27:53.294301 4825 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 15:27:53.297328 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 15:27:53.297571 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"46dc8e785776a9c52e422c13aba59b34e607a8c768393e86bc5fc4fdfa5f4e12"} Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 15:27:53.298381 4825 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 15:27:53.298542 4825 status_manager.go:851] "Failed to get status for pod" podUID="2c66dbaa-8b04-4f97-be82-717510c14a1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": 
dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 15:27:53.298785 4825 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 15:27:53.299136 4825 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 15:27:53.525095 4825 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 15:27:53.525428 4825 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 15:27:53.525704 4825 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 15:27:53.526080 4825 status_manager.go:851] "Failed to get status for pod" podUID="2c66dbaa-8b04-4f97-be82-717510c14a1c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:53 crc kubenswrapper[4825]: I0122 15:27:53.526233 4825 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Jan 22 15:27:54 crc kubenswrapper[4825]: I0122 15:27:54.316218 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"688c8f83993e84c956b479800a16a2553b2f9f023232acf6e8fc2ccf9c2ce77e"} Jan 22 15:27:54 crc kubenswrapper[4825]: I0122 15:27:54.316262 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7cdf81e7313f7d144ba5d6cb4ef37c86c8103f64cc25e9fa371287a6a2c1a29d"} Jan 22 15:27:54 crc kubenswrapper[4825]: I0122 15:27:54.316274 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"74af8a651fc24cfc18ae9eafd893c0e30d58981399139f4f3a4fe91ff0fe40f6"} Jan 22 15:27:54 crc kubenswrapper[4825]: I0122 15:27:54.316284 4825 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9a2fc7748c6390042cc1a8397608b69e235502beca8f87c7a13589f06f04d14d"} Jan 22 15:27:55 crc kubenswrapper[4825]: I0122 15:27:55.325358 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bb0ee2c7de6f88f00724bb8efe1b16093d44e3ae0d539d54652dd982865f1c47"} Jan 22 15:27:55 crc kubenswrapper[4825]: I0122 15:27:55.325706 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:27:55 crc kubenswrapper[4825]: I0122 15:27:55.325861 4825 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4e0b252c-291b-4c92-9f1a-f10e9026fcb4" Jan 22 15:27:55 crc kubenswrapper[4825]: I0122 15:27:55.325890 4825 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4e0b252c-291b-4c92-9f1a-f10e9026fcb4" Jan 22 15:27:56 crc kubenswrapper[4825]: I0122 15:27:56.350082 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" podUID="3f90b820-57dd-4be0-9648-de26783bc914" containerName="oauth-openshift" containerID="cri-o://0c7fde5db288981600f363e9750064411db8d228fc5fda67f2fb8b3361c1db11" gracePeriod=15 Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.254478 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.339497 4825 generic.go:334] "Generic (PLEG): container finished" podID="3f90b820-57dd-4be0-9648-de26783bc914" containerID="0c7fde5db288981600f363e9750064411db8d228fc5fda67f2fb8b3361c1db11" exitCode=0 Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.339539 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" event={"ID":"3f90b820-57dd-4be0-9648-de26783bc914","Type":"ContainerDied","Data":"0c7fde5db288981600f363e9750064411db8d228fc5fda67f2fb8b3361c1db11"} Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.339564 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" event={"ID":"3f90b820-57dd-4be0-9648-de26783bc914","Type":"ContainerDied","Data":"87d705a39f6f429067200fa2f4f83ce06e9a93f7999abb20cdb63f3047033498"} Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.339588 4825 scope.go:117] "RemoveContainer" containerID="0c7fde5db288981600f363e9750064411db8d228fc5fda67f2fb8b3361c1db11" Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.339582 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f22rt" Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.359087 4825 scope.go:117] "RemoveContainer" containerID="0c7fde5db288981600f363e9750064411db8d228fc5fda67f2fb8b3361c1db11" Jan 22 15:27:57 crc kubenswrapper[4825]: E0122 15:27:57.359549 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c7fde5db288981600f363e9750064411db8d228fc5fda67f2fb8b3361c1db11\": container with ID starting with 0c7fde5db288981600f363e9750064411db8d228fc5fda67f2fb8b3361c1db11 not found: ID does not exist" containerID="0c7fde5db288981600f363e9750064411db8d228fc5fda67f2fb8b3361c1db11" Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.359606 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c7fde5db288981600f363e9750064411db8d228fc5fda67f2fb8b3361c1db11"} err="failed to get container status \"0c7fde5db288981600f363e9750064411db8d228fc5fda67f2fb8b3361c1db11\": rpc error: code = NotFound desc = could not find container \"0c7fde5db288981600f363e9750064411db8d228fc5fda67f2fb8b3361c1db11\": container with ID starting with 0c7fde5db288981600f363e9750064411db8d228fc5fda67f2fb8b3361c1db11 not found: ID does not exist" Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.419743 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-trusted-ca-bundle\") pod \"3f90b820-57dd-4be0-9648-de26783bc914\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") " Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.419815 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-template-provider-selection\") pod \"3f90b820-57dd-4be0-9648-de26783bc914\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") "
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.419858 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-template-login\") pod \"3f90b820-57dd-4be0-9648-de26783bc914\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") "
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.420078 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-session\") pod \"3f90b820-57dd-4be0-9648-de26783bc914\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") "
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.420202 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwgmq\" (UniqueName: \"kubernetes.io/projected/3f90b820-57dd-4be0-9648-de26783bc914-kube-api-access-cwgmq\") pod \"3f90b820-57dd-4be0-9648-de26783bc914\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") "
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.420249 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-service-ca\") pod \"3f90b820-57dd-4be0-9648-de26783bc914\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") "
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.420272 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-serving-cert\") pod \"3f90b820-57dd-4be0-9648-de26783bc914\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") "
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.420292 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-cliconfig\") pod \"3f90b820-57dd-4be0-9648-de26783bc914\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") "
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.420313 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-template-error\") pod \"3f90b820-57dd-4be0-9648-de26783bc914\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") "
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.420345 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-ocp-branding-template\") pod \"3f90b820-57dd-4be0-9648-de26783bc914\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") "
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.420384 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-router-certs\") pod \"3f90b820-57dd-4be0-9648-de26783bc914\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") "
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.420411 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3f90b820-57dd-4be0-9648-de26783bc914-audit-dir\") pod \"3f90b820-57dd-4be0-9648-de26783bc914\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") "
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.421048 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "3f90b820-57dd-4be0-9648-de26783bc914" (UID: "3f90b820-57dd-4be0-9648-de26783bc914"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.420662 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-idp-0-file-data\") pod \"3f90b820-57dd-4be0-9648-de26783bc914\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") "
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.421177 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-audit-policies\") pod \"3f90b820-57dd-4be0-9648-de26783bc914\" (UID: \"3f90b820-57dd-4be0-9648-de26783bc914\") "
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.421186 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "3f90b820-57dd-4be0-9648-de26783bc914" (UID: "3f90b820-57dd-4be0-9648-de26783bc914"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.421226 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f90b820-57dd-4be0-9648-de26783bc914-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3f90b820-57dd-4be0-9648-de26783bc914" (UID: "3f90b820-57dd-4be0-9648-de26783bc914"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.421361 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "3f90b820-57dd-4be0-9648-de26783bc914" (UID: "3f90b820-57dd-4be0-9648-de26783bc914"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.421519 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.421531 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.421541 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.421552 4825 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3f90b820-57dd-4be0-9648-de26783bc914-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.422224 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "3f90b820-57dd-4be0-9648-de26783bc914" (UID: "3f90b820-57dd-4be0-9648-de26783bc914"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.426190 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "3f90b820-57dd-4be0-9648-de26783bc914" (UID: "3f90b820-57dd-4be0-9648-de26783bc914"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.427094 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "3f90b820-57dd-4be0-9648-de26783bc914" (UID: "3f90b820-57dd-4be0-9648-de26783bc914"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.428480 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "3f90b820-57dd-4be0-9648-de26783bc914" (UID: "3f90b820-57dd-4be0-9648-de26783bc914"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.432792 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "3f90b820-57dd-4be0-9648-de26783bc914" (UID: "3f90b820-57dd-4be0-9648-de26783bc914"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.433154 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "3f90b820-57dd-4be0-9648-de26783bc914" (UID: "3f90b820-57dd-4be0-9648-de26783bc914"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.433235 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f90b820-57dd-4be0-9648-de26783bc914-kube-api-access-cwgmq" (OuterVolumeSpecName: "kube-api-access-cwgmq") pod "3f90b820-57dd-4be0-9648-de26783bc914" (UID: "3f90b820-57dd-4be0-9648-de26783bc914"). InnerVolumeSpecName "kube-api-access-cwgmq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.433750 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "3f90b820-57dd-4be0-9648-de26783bc914" (UID: "3f90b820-57dd-4be0-9648-de26783bc914"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.434015 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "3f90b820-57dd-4be0-9648-de26783bc914" (UID: "3f90b820-57dd-4be0-9648-de26783bc914"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.437109 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "3f90b820-57dd-4be0-9648-de26783bc914" (UID: "3f90b820-57dd-4be0-9648-de26783bc914"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.522297 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.522496 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.522506 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.522515 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.522524 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.522534 4825 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3f90b820-57dd-4be0-9648-de26783bc914-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.522544 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.522555 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.522564 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3f90b820-57dd-4be0-9648-de26783bc914-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.522574 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwgmq\" (UniqueName: \"kubernetes.io/projected/3f90b820-57dd-4be0-9648-de26783bc914-kube-api-access-cwgmq\") on node \"crc\" DevicePath \"\""
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.529997 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.530049 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 15:27:57 crc kubenswrapper[4825]: I0122 15:27:57.535145 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 15:27:58 crc kubenswrapper[4825]: I0122 15:27:58.555649 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 15:27:59 crc kubenswrapper[4825]: I0122 15:27:59.584267 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 15:27:59 crc kubenswrapper[4825]: I0122 15:27:59.588606 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 15:28:00 crc kubenswrapper[4825]: I0122 15:28:00.333223 4825 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 15:28:00 crc kubenswrapper[4825]: I0122 15:28:00.355024 4825 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4e0b252c-291b-4c92-9f1a-f10e9026fcb4"
Jan 22 15:28:00 crc kubenswrapper[4825]: I0122 15:28:00.355055 4825 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4e0b252c-291b-4c92-9f1a-f10e9026fcb4"
Jan 22 15:28:00 crc kubenswrapper[4825]: I0122 15:28:00.361961 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 15:28:00 crc kubenswrapper[4825]: I0122 15:28:00.366673 4825 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e4cf0de3-196c-4f76-aae4-e33163176f08"
Jan 22 15:28:01 crc kubenswrapper[4825]: I0122 15:28:01.359337 4825 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4e0b252c-291b-4c92-9f1a-f10e9026fcb4"
Jan 22 15:28:01 crc kubenswrapper[4825]: I0122 15:28:01.359369 4825 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4e0b252c-291b-4c92-9f1a-f10e9026fcb4"
Jan 22 15:28:01 crc kubenswrapper[4825]: I0122 15:28:01.362380 4825 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e4cf0de3-196c-4f76-aae4-e33163176f08"
Jan 22 15:28:05 crc kubenswrapper[4825]: I0122 15:28:05.542214 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 15:28:05 crc kubenswrapper[4825]: I0122 15:28:05.542850 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 15:28:05 crc kubenswrapper[4825]: I0122 15:28:05.542941 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt"
Jan 22 15:28:05 crc kubenswrapper[4825]: I0122 15:28:05.544549 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42"} pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 15:28:05 crc kubenswrapper[4825]: I0122 15:28:05.544640 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" containerID="cri-o://fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42" gracePeriod=600
Jan 22 15:28:06 crc kubenswrapper[4825]: I0122 15:28:06.387821 4825 generic.go:334] "Generic (PLEG): container finished" podID="1d6015ae-d193-4854-9861-dc4384510fdb" containerID="fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42" exitCode=0
Jan 22 15:28:06 crc kubenswrapper[4825]: I0122 15:28:06.387903 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerDied","Data":"fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42"}
Jan 22 15:28:06 crc kubenswrapper[4825]: I0122 15:28:06.388322 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerStarted","Data":"70136c7cc46f39bc356a97e0057511092c22deb2e74a289548614a289b601d0b"}
Jan 22 15:28:08 crc kubenswrapper[4825]: I0122 15:28:08.549122 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 15:28:09 crc kubenswrapper[4825]: I0122 15:28:09.342909 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 22 15:28:09 crc kubenswrapper[4825]: I0122 15:28:09.555301 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 22 15:28:09 crc kubenswrapper[4825]: I0122 15:28:09.959952 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 22 15:28:10 crc kubenswrapper[4825]: I0122 15:28:10.125559 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 22 15:28:10 crc kubenswrapper[4825]: I0122 15:28:10.468733 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 22 15:28:10 crc kubenswrapper[4825]: I0122 15:28:10.610868 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 22 15:28:10 crc kubenswrapper[4825]: I0122 15:28:10.704750 4825 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 22 15:28:11 crc kubenswrapper[4825]: I0122 15:28:11.138777 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 22 15:28:11 crc kubenswrapper[4825]: I0122 15:28:11.738312 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 22 15:28:11 crc kubenswrapper[4825]: I0122 15:28:11.856192 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 22 15:28:12 crc kubenswrapper[4825]: I0122 15:28:11.999890 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 22 15:28:12 crc kubenswrapper[4825]: I0122 15:28:12.077755 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 22 15:28:12 crc kubenswrapper[4825]: I0122 15:28:12.183262 4825 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 22 15:28:12 crc kubenswrapper[4825]: I0122 15:28:12.379382 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 22 15:28:12 crc kubenswrapper[4825]: I0122 15:28:12.609773 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 22 15:28:12 crc kubenswrapper[4825]: I0122 15:28:12.622641 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 22 15:28:12 crc kubenswrapper[4825]: I0122 15:28:12.624904 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 22 15:28:12 crc kubenswrapper[4825]: I0122 15:28:12.647860 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 22 15:28:13 crc kubenswrapper[4825]: I0122 15:28:13.094459 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 22 15:28:13 crc kubenswrapper[4825]: I0122 15:28:13.111854 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 22 15:28:13 crc kubenswrapper[4825]: I0122 15:28:13.192432 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 22 15:28:13 crc kubenswrapper[4825]: I0122 15:28:13.193349 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 22 15:28:13 crc kubenswrapper[4825]: I0122 15:28:13.207137 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 22 15:28:13 crc kubenswrapper[4825]: I0122 15:28:13.271127 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 22 15:28:13 crc kubenswrapper[4825]: I0122 15:28:13.345181 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 22 15:28:13 crc kubenswrapper[4825]: I0122 15:28:13.396648 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 22 15:28:13 crc kubenswrapper[4825]: I0122 15:28:13.546095 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 22 15:28:13 crc kubenswrapper[4825]: I0122 15:28:13.638011 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 22 15:28:13 crc kubenswrapper[4825]: I0122 15:28:13.681665 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 22 15:28:13 crc kubenswrapper[4825]: I0122 15:28:13.715153 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 22 15:28:13 crc kubenswrapper[4825]: I0122 15:28:13.743576 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 22 15:28:13 crc kubenswrapper[4825]: I0122 15:28:13.774000 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 22 15:28:13 crc kubenswrapper[4825]: I0122 15:28:13.826991 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.068118 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.103860 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.105865 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.125654 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.133311 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.140054 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.170560 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.222200 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.227233 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.241144 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.263652 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.319544 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.333372 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.392673 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.420863 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.455230 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.533188 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.548667 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.630420 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.726605 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.732715 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.752328 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.848251 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.856719 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.869830 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.878421 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.989900 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.992485 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 22 15:28:14 crc kubenswrapper[4825]: I0122 15:28:14.995754 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 22 15:28:15 crc kubenswrapper[4825]: I0122 15:28:15.157254 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 22 15:28:15 crc kubenswrapper[4825]: I0122 15:28:15.195382 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 22 15:28:15 crc kubenswrapper[4825]: I0122 15:28:15.197006 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 22 15:28:15 crc kubenswrapper[4825]: I0122 15:28:15.327357 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 22 15:28:15 crc kubenswrapper[4825]: I0122 15:28:15.358074 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 22 15:28:15 crc kubenswrapper[4825]: I0122 15:28:15.378489 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 22 15:28:15 crc kubenswrapper[4825]: I0122 15:28:15.425767 4825 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 22 15:28:15 crc kubenswrapper[4825]: I0122 15:28:15.494817 4825 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 22 15:28:15 crc kubenswrapper[4825]: I0122 15:28:15.697109 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 22 15:28:15 crc kubenswrapper[4825]: I0122 15:28:15.799719 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 22 15:28:15 crc kubenswrapper[4825]: I0122 15:28:15.830944 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 22 15:28:15 crc kubenswrapper[4825]: I0122 15:28:15.871564 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.011391 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.062816 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.066687 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.082073 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.097174 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.139336 4825 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.161425 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.191316 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.217561 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.462170 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.479544 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.491831 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.508055 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.551255 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.601778 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.670915 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.781652 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.813692 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.891061 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.916940 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.985295 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 22 15:28:16 crc kubenswrapper[4825]: I0122 15:28:16.995687 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 22 15:28:17 crc kubenswrapper[4825]: I0122 15:28:17.029411 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 22 15:28:17 crc kubenswrapper[4825]: I0122 15:28:17.151673 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 22 15:28:17 crc kubenswrapper[4825]: I0122 15:28:17.198786 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 22 15:28:17 crc kubenswrapper[4825]: I0122 15:28:17.211735 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 22 15:28:17 crc kubenswrapper[4825]: I0122 15:28:17.240182 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 22 15:28:17 crc kubenswrapper[4825]: I0122 15:28:17.284275 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 22 15:28:17 crc kubenswrapper[4825]: I0122 15:28:17.292835 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 22 15:28:17 crc kubenswrapper[4825]: I0122 15:28:17.493761 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 22 15:28:17 crc kubenswrapper[4825]: I0122 15:28:17.495453 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 22 15:28:17 crc kubenswrapper[4825]: I0122 15:28:17.621746 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 22 15:28:17 crc kubenswrapper[4825]: I0122 15:28:17.821679 4825 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 15:28:17 crc kubenswrapper[4825]: I0122 15:28:17.827858 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 15:28:17 crc kubenswrapper[4825]: I0122 15:28:17.912346 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 22 15:28:17 crc kubenswrapper[4825]: I0122 15:28:17.962818 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 15:28:17 crc kubenswrapper[4825]: I0122 15:28:17.971482 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 15:28:18 crc kubenswrapper[4825]: I0122 15:28:18.075017 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 15:28:18 crc kubenswrapper[4825]: I0122 15:28:18.102175 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 15:28:18 crc kubenswrapper[4825]: I0122 15:28:18.105774 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 15:28:18 crc kubenswrapper[4825]: I0122 15:28:18.148504 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 22 15:28:18 crc kubenswrapper[4825]: I0122 15:28:18.202131 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 22 15:28:18 crc kubenswrapper[4825]: I0122 15:28:18.387114 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 22 15:28:18 crc kubenswrapper[4825]: I0122 15:28:18.496924 4825 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 15:28:18 crc kubenswrapper[4825]: I0122 15:28:18.577046 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 22 15:28:18 crc kubenswrapper[4825]: I0122 15:28:18.605533 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 15:28:18 crc kubenswrapper[4825]: I0122 15:28:18.644958 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 15:28:18 crc kubenswrapper[4825]: I0122 15:28:18.674958 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 15:28:18 crc kubenswrapper[4825]: I0122 15:28:18.687896 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 22 15:28:18 crc kubenswrapper[4825]: I0122 15:28:18.710451 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 22 15:28:18 crc kubenswrapper[4825]: I0122 15:28:18.882333 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 15:28:18 crc kubenswrapper[4825]: I0122 15:28:18.946046 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 22 15:28:18 crc kubenswrapper[4825]: I0122 15:28:18.949787 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.072327 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.083926 4825 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.099700 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.190939 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.355947 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.461380 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.481387 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.498078 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.513698 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.575846 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.584040 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.640801 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 
15:28:19.640884 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.641918 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.643863 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.791343 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.793671 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.809273 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.824382 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.836026 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.854231 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.861249 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 15:28:19 crc kubenswrapper[4825]: I0122 15:28:19.919714 4825 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.039640 4825 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.045033 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=42.0450137 podStartE2EDuration="42.0450137s" podCreationTimestamp="2026-01-22 15:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:28:00.345900464 +0000 UTC m=+227.107427374" watchObservedRunningTime="2026-01-22 15:28:20.0450137 +0000 UTC m=+246.806540610" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.045629 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f22rt","openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.045672 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-c59948947-n4d9g","openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 15:28:20 crc kubenswrapper[4825]: E0122 15:28:20.045819 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f90b820-57dd-4be0-9648-de26783bc914" containerName="oauth-openshift" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.045833 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f90b820-57dd-4be0-9648-de26783bc914" containerName="oauth-openshift" Jan 22 15:28:20 crc kubenswrapper[4825]: E0122 15:28:20.045852 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c66dbaa-8b04-4f97-be82-717510c14a1c" containerName="installer" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.045858 4825 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2c66dbaa-8b04-4f97-be82-717510c14a1c" containerName="installer" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.045957 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f90b820-57dd-4be0-9648-de26783bc914" containerName="oauth-openshift" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.045967 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c66dbaa-8b04-4f97-be82-717510c14a1c" containerName="installer" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.046236 4825 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4e0b252c-291b-4c92-9f1a-f10e9026fcb4" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.046287 4825 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4e0b252c-291b-4c92-9f1a-f10e9026fcb4" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.046440 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.050840 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.051112 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.051523 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.051635 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.051914 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.052091 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.053945 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.054027 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.054041 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.053951 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.054143 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.054334 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.055821 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.062780 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.063097 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.067502 4825 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.072560 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.077798 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.081564 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.081676 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.082997 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.082967161 podStartE2EDuration="20.082967161s" podCreationTimestamp="2026-01-22 15:28:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:28:20.075472847 +0000 UTC m=+246.836999757" watchObservedRunningTime="2026-01-22 15:28:20.082967161 +0000 UTC m=+246.844494071" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.145405 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-session\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.145457 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-user-template-login\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.145490 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9c10be4-87d7-490a-8586-86771b87077b-audit-dir\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.145518 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.145544 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-user-template-error\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.145574 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.145599 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.145621 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-router-certs\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.145646 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.145791 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-service-ca\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: 
\"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.145937 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.146050 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf8q4\" (UniqueName: \"kubernetes.io/projected/e9c10be4-87d7-490a-8586-86771b87077b-kube-api-access-lf8q4\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.146088 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9c10be4-87d7-490a-8586-86771b87077b-audit-policies\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.146166 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.170002 4825 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.212970 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.246945 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.247019 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-router-certs\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.247051 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.247074 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-service-ca\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " 
pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.247100 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.247124 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf8q4\" (UniqueName: \"kubernetes.io/projected/e9c10be4-87d7-490a-8586-86771b87077b-kube-api-access-lf8q4\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.247148 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9c10be4-87d7-490a-8586-86771b87077b-audit-policies\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.247181 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.247213 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-session\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.247241 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-user-template-login\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.247263 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9c10be4-87d7-490a-8586-86771b87077b-audit-dir\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.247292 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.247321 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-user-template-error\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc 
kubenswrapper[4825]: I0122 15:28:20.247346 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.248204 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9c10be4-87d7-490a-8586-86771b87077b-audit-dir\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.248689 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-service-ca\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.248831 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.248847 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9c10be4-87d7-490a-8586-86771b87077b-audit-policies\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: 
\"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.249390 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.252663 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.253011 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.253254 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-session\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.253850 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-user-template-login\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.254720 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.254959 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-user-template-error\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.255121 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.255352 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9c10be4-87d7-490a-8586-86771b87077b-v4-0-config-system-router-certs\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " 
pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.263356 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf8q4\" (UniqueName: \"kubernetes.io/projected/e9c10be4-87d7-490a-8586-86771b87077b-kube-api-access-lf8q4\") pod \"oauth-openshift-c59948947-n4d9g\" (UID: \"e9c10be4-87d7-490a-8586-86771b87077b\") " pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.348473 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.375171 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.547313 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.601690 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.602805 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c59948947-n4d9g"] Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.680939 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.735779 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.779420 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c59948947-n4d9g"] Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 
15:28:20.802872 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.940491 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 22 15:28:20 crc kubenswrapper[4825]: I0122 15:28:20.940863 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.062144 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.211254 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.323350 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.394670 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.406172 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.424372 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.470786 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.482633 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication_oauth-openshift-c59948947-n4d9g_e9c10be4-87d7-490a-8586-86771b87077b/oauth-openshift/0.log" Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.482698 4825 generic.go:334] "Generic (PLEG): container finished" podID="e9c10be4-87d7-490a-8586-86771b87077b" containerID="3dcc56d367feaaffe1d39b7f2a9886bb80f13bd36380bc184b9af812bd9c2157" exitCode=255 Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.482754 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" event={"ID":"e9c10be4-87d7-490a-8586-86771b87077b","Type":"ContainerDied","Data":"3dcc56d367feaaffe1d39b7f2a9886bb80f13bd36380bc184b9af812bd9c2157"} Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.482799 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" event={"ID":"e9c10be4-87d7-490a-8586-86771b87077b","Type":"ContainerStarted","Data":"8b8fcc26aa33d49844ed554d7c3517c05ade9aeb36c9e6fafb8ef1d134d84313"} Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.483404 4825 scope.go:117] "RemoveContainer" containerID="3dcc56d367feaaffe1d39b7f2a9886bb80f13bd36380bc184b9af812bd9c2157" Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.533033 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f90b820-57dd-4be0-9648-de26783bc914" path="/var/lib/kubelet/pods/3f90b820-57dd-4be0-9648-de26783bc914/volumes" Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.535724 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.539540 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.575156 4825 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.623483 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.667916 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.720029 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.720536 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 22 15:28:21 crc kubenswrapper[4825]: I0122 15:28:21.763884 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 15:28:22 crc kubenswrapper[4825]: I0122 15:28:22.058840 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 22 15:28:22 crc kubenswrapper[4825]: I0122 15:28:22.062219 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 15:28:22 crc kubenswrapper[4825]: I0122 15:28:22.381680 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 22 15:28:22 crc kubenswrapper[4825]: I0122 15:28:22.488961 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-c59948947-n4d9g_e9c10be4-87d7-490a-8586-86771b87077b/oauth-openshift/0.log" Jan 22 15:28:22 crc kubenswrapper[4825]: I0122 15:28:22.489037 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" 
event={"ID":"e9c10be4-87d7-490a-8586-86771b87077b","Type":"ContainerStarted","Data":"773c7110be2363b349caf3ad2a3a7a787b51ae8616e0ed5854a22d69e7ee0252"} Jan 22 15:28:22 crc kubenswrapper[4825]: I0122 15:28:22.489543 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:22 crc kubenswrapper[4825]: I0122 15:28:22.498045 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" Jan 22 15:28:22 crc kubenswrapper[4825]: I0122 15:28:22.520287 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-c59948947-n4d9g" podStartSLOduration=51.520264404 podStartE2EDuration="51.520264404s" podCreationTimestamp="2026-01-22 15:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:28:22.515445602 +0000 UTC m=+249.276972532" watchObservedRunningTime="2026-01-22 15:28:22.520264404 +0000 UTC m=+249.281791314" Jan 22 15:28:22 crc kubenswrapper[4825]: I0122 15:28:22.548611 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 22 15:28:22 crc kubenswrapper[4825]: I0122 15:28:22.563482 4825 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 15:28:22 crc kubenswrapper[4825]: I0122 15:28:22.563690 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://bb26d00512b7d70b4177b1be846278844aa8b3f0b5248da8a9f3fca0dc8fb539" gracePeriod=5 Jan 22 15:28:22 crc kubenswrapper[4825]: I0122 15:28:22.577973 4825 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 22 15:28:22 crc kubenswrapper[4825]: I0122 15:28:22.615576 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 22 15:28:22 crc kubenswrapper[4825]: I0122 15:28:22.659943 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 22 15:28:22 crc kubenswrapper[4825]: I0122 15:28:22.867641 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 15:28:22 crc kubenswrapper[4825]: I0122 15:28:22.930598 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 22 15:28:22 crc kubenswrapper[4825]: I0122 15:28:22.955710 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 15:28:23 crc kubenswrapper[4825]: I0122 15:28:23.041328 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 15:28:23 crc kubenswrapper[4825]: I0122 15:28:23.134093 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 22 15:28:23 crc kubenswrapper[4825]: I0122 15:28:23.244465 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 22 15:28:23 crc kubenswrapper[4825]: I0122 15:28:23.253512 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 15:28:23 crc kubenswrapper[4825]: I0122 15:28:23.531387 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 15:28:23 crc kubenswrapper[4825]: I0122 15:28:23.561702 4825 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 22 15:28:23 crc kubenswrapper[4825]: I0122 15:28:23.571379 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 15:28:23 crc kubenswrapper[4825]: I0122 15:28:23.579837 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 22 15:28:23 crc kubenswrapper[4825]: I0122 15:28:23.725374 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 22 15:28:23 crc kubenswrapper[4825]: I0122 15:28:23.734717 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 15:28:23 crc kubenswrapper[4825]: I0122 15:28:23.809302 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 22 15:28:23 crc kubenswrapper[4825]: I0122 15:28:23.810969 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 22 15:28:23 crc kubenswrapper[4825]: I0122 15:28:23.816968 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 15:28:23 crc kubenswrapper[4825]: I0122 15:28:23.870512 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 15:28:23 crc kubenswrapper[4825]: I0122 15:28:23.875561 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 22 15:28:23 crc kubenswrapper[4825]: I0122 15:28:23.966938 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" 
Jan 22 15:28:24 crc kubenswrapper[4825]: I0122 15:28:24.061615 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 22 15:28:24 crc kubenswrapper[4825]: I0122 15:28:24.265676 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 15:28:24 crc kubenswrapper[4825]: I0122 15:28:24.317755 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 22 15:28:24 crc kubenswrapper[4825]: I0122 15:28:24.341491 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 15:28:24 crc kubenswrapper[4825]: I0122 15:28:24.508015 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 22 15:28:24 crc kubenswrapper[4825]: I0122 15:28:24.549212 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 22 15:28:24 crc kubenswrapper[4825]: I0122 15:28:24.608734 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 22 15:28:24 crc kubenswrapper[4825]: I0122 15:28:24.651715 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 15:28:24 crc kubenswrapper[4825]: I0122 15:28:24.739421 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 22 15:28:24 crc kubenswrapper[4825]: I0122 15:28:24.805324 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 22 15:28:24 crc kubenswrapper[4825]: I0122 15:28:24.891065 4825 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 15:28:24 crc kubenswrapper[4825]: I0122 15:28:24.914378 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 22 15:28:25 crc kubenswrapper[4825]: I0122 15:28:25.078828 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 22 15:28:25 crc kubenswrapper[4825]: I0122 15:28:25.260443 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 22 15:28:25 crc kubenswrapper[4825]: I0122 15:28:25.272106 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 22 15:28:25 crc kubenswrapper[4825]: I0122 15:28:25.320235 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 15:28:25 crc kubenswrapper[4825]: I0122 15:28:25.360837 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 15:28:25 crc kubenswrapper[4825]: I0122 15:28:25.760882 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 15:28:25 crc kubenswrapper[4825]: I0122 15:28:25.940443 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 22 15:28:26 crc kubenswrapper[4825]: I0122 15:28:26.120258 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 22 15:28:26 crc kubenswrapper[4825]: I0122 15:28:26.191540 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 
15:28:26 crc kubenswrapper[4825]: I0122 15:28:26.515671 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 22 15:28:26 crc kubenswrapper[4825]: I0122 15:28:26.549087 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 22 15:28:27 crc kubenswrapper[4825]: I0122 15:28:27.108253 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 22 15:28:27 crc kubenswrapper[4825]: I0122 15:28:27.309929 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 15:28:27 crc kubenswrapper[4825]: I0122 15:28:27.342198 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.154148 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.154247 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.317206 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.347590 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.347910 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.348066 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.348211 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.348305 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 
15:28:28.347823 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.348060 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.348099 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.348391 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.349047 4825 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.349076 4825 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.349087 4825 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.349098 4825 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.355525 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.449767 4825 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.519341 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.519689 4825 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="bb26d00512b7d70b4177b1be846278844aa8b3f0b5248da8a9f3fca0dc8fb539" exitCode=137
Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.519809 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.519773 4825 scope.go:117] "RemoveContainer" containerID="bb26d00512b7d70b4177b1be846278844aa8b3f0b5248da8a9f3fca0dc8fb539"
Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.538819 4825 scope.go:117] "RemoveContainer" containerID="bb26d00512b7d70b4177b1be846278844aa8b3f0b5248da8a9f3fca0dc8fb539"
Jan 22 15:28:28 crc kubenswrapper[4825]: E0122 15:28:28.539357 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb26d00512b7d70b4177b1be846278844aa8b3f0b5248da8a9f3fca0dc8fb539\": container with ID starting with bb26d00512b7d70b4177b1be846278844aa8b3f0b5248da8a9f3fca0dc8fb539 not found: ID does not exist" containerID="bb26d00512b7d70b4177b1be846278844aa8b3f0b5248da8a9f3fca0dc8fb539"
Jan 22 15:28:28 crc kubenswrapper[4825]: I0122 15:28:28.539396 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb26d00512b7d70b4177b1be846278844aa8b3f0b5248da8a9f3fca0dc8fb539"} err="failed to get container status \"bb26d00512b7d70b4177b1be846278844aa8b3f0b5248da8a9f3fca0dc8fb539\": rpc error: code = NotFound desc = could not find container \"bb26d00512b7d70b4177b1be846278844aa8b3f0b5248da8a9f3fca0dc8fb539\": container with ID starting with bb26d00512b7d70b4177b1be846278844aa8b3f0b5248da8a9f3fca0dc8fb539 not found: ID does not exist"
Jan 22 15:28:29 crc kubenswrapper[4825]: I0122 15:28:29.525566 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 22 15:28:29 crc kubenswrapper[4825]: I0122 15:28:29.525893 4825 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 22 15:28:29 crc kubenswrapper[4825]: I0122 15:28:29.534440 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 22 15:28:29 crc kubenswrapper[4825]: I0122 15:28:29.534488 4825 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="eeb321d8-e032-4926-a973-faa75ec5fc01"
Jan 22 15:28:29 crc kubenswrapper[4825]: I0122 15:28:29.537649 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 22 15:28:29 crc kubenswrapper[4825]: I0122 15:28:29.537685 4825 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="eeb321d8-e032-4926-a973-faa75ec5fc01"
Jan 22 15:28:55 crc kubenswrapper[4825]: I0122 15:28:55.744452 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s7pg5"]
Jan 22 15:28:55 crc kubenswrapper[4825]: I0122 15:28:55.745156 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" podUID="e57fb87b-8cec-4c88-a802-69631aef1a2e" containerName="controller-manager" containerID="cri-o://a6f09725f581870e9bebfddddde99485092b314d27d4912234ff05c0551b63a5" gracePeriod=30
Jan 22 15:28:55 crc kubenswrapper[4825]: I0122 15:28:55.866210 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k"]
Jan 22 15:28:55 crc kubenswrapper[4825]: I0122 15:28:55.866718 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" podUID="72257f30-9f17-4974-aeec-0755be040824" containerName="route-controller-manager" containerID="cri-o://cbb7c82899217e49d71ebe5507d332f194a9026a3b7a2b9f301a847f5bdbd0c1" gracePeriod=30
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.126820 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5"
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.205152 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e57fb87b-8cec-4c88-a802-69631aef1a2e-config\") pod \"e57fb87b-8cec-4c88-a802-69631aef1a2e\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") "
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.205236 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e57fb87b-8cec-4c88-a802-69631aef1a2e-client-ca\") pod \"e57fb87b-8cec-4c88-a802-69631aef1a2e\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") "
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.205278 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e57fb87b-8cec-4c88-a802-69631aef1a2e-serving-cert\") pod \"e57fb87b-8cec-4c88-a802-69631aef1a2e\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") "
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.205323 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9vgv\" (UniqueName: \"kubernetes.io/projected/e57fb87b-8cec-4c88-a802-69631aef1a2e-kube-api-access-k9vgv\") pod \"e57fb87b-8cec-4c88-a802-69631aef1a2e\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") "
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.205344 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e57fb87b-8cec-4c88-a802-69631aef1a2e-proxy-ca-bundles\") pod \"e57fb87b-8cec-4c88-a802-69631aef1a2e\" (UID: \"e57fb87b-8cec-4c88-a802-69631aef1a2e\") "
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.206400 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e57fb87b-8cec-4c88-a802-69631aef1a2e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e57fb87b-8cec-4c88-a802-69631aef1a2e" (UID: "e57fb87b-8cec-4c88-a802-69631aef1a2e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.206833 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e57fb87b-8cec-4c88-a802-69631aef1a2e-config" (OuterVolumeSpecName: "config") pod "e57fb87b-8cec-4c88-a802-69631aef1a2e" (UID: "e57fb87b-8cec-4c88-a802-69631aef1a2e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.207123 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e57fb87b-8cec-4c88-a802-69631aef1a2e-client-ca" (OuterVolumeSpecName: "client-ca") pod "e57fb87b-8cec-4c88-a802-69631aef1a2e" (UID: "e57fb87b-8cec-4c88-a802-69631aef1a2e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.213791 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57fb87b-8cec-4c88-a802-69631aef1a2e-kube-api-access-k9vgv" (OuterVolumeSpecName: "kube-api-access-k9vgv") pod "e57fb87b-8cec-4c88-a802-69631aef1a2e" (UID: "e57fb87b-8cec-4c88-a802-69631aef1a2e"). InnerVolumeSpecName "kube-api-access-k9vgv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.216917 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57fb87b-8cec-4c88-a802-69631aef1a2e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e57fb87b-8cec-4c88-a802-69631aef1a2e" (UID: "e57fb87b-8cec-4c88-a802-69631aef1a2e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.248046 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k"
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.306530 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwtt4\" (UniqueName: \"kubernetes.io/projected/72257f30-9f17-4974-aeec-0755be040824-kube-api-access-cwtt4\") pod \"72257f30-9f17-4974-aeec-0755be040824\" (UID: \"72257f30-9f17-4974-aeec-0755be040824\") "
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.306587 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72257f30-9f17-4974-aeec-0755be040824-client-ca\") pod \"72257f30-9f17-4974-aeec-0755be040824\" (UID: \"72257f30-9f17-4974-aeec-0755be040824\") "
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.306617 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72257f30-9f17-4974-aeec-0755be040824-serving-cert\") pod \"72257f30-9f17-4974-aeec-0755be040824\" (UID: \"72257f30-9f17-4974-aeec-0755be040824\") "
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.306640 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72257f30-9f17-4974-aeec-0755be040824-config\") pod \"72257f30-9f17-4974-aeec-0755be040824\" (UID: \"72257f30-9f17-4974-aeec-0755be040824\") "
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.306767 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e57fb87b-8cec-4c88-a802-69631aef1a2e-config\") on node \"crc\" DevicePath \"\""
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.306778 4825 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e57fb87b-8cec-4c88-a802-69631aef1a2e-client-ca\") on node \"crc\" DevicePath \"\""
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.306810 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e57fb87b-8cec-4c88-a802-69631aef1a2e-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.306819 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9vgv\" (UniqueName: \"kubernetes.io/projected/e57fb87b-8cec-4c88-a802-69631aef1a2e-kube-api-access-k9vgv\") on node \"crc\" DevicePath \"\""
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.306828 4825 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e57fb87b-8cec-4c88-a802-69631aef1a2e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.307478 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72257f30-9f17-4974-aeec-0755be040824-config" (OuterVolumeSpecName: "config") pod "72257f30-9f17-4974-aeec-0755be040824" (UID: "72257f30-9f17-4974-aeec-0755be040824"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.307587 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72257f30-9f17-4974-aeec-0755be040824-client-ca" (OuterVolumeSpecName: "client-ca") pod "72257f30-9f17-4974-aeec-0755be040824" (UID: "72257f30-9f17-4974-aeec-0755be040824"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.309516 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72257f30-9f17-4974-aeec-0755be040824-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "72257f30-9f17-4974-aeec-0755be040824" (UID: "72257f30-9f17-4974-aeec-0755be040824"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.310415 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72257f30-9f17-4974-aeec-0755be040824-kube-api-access-cwtt4" (OuterVolumeSpecName: "kube-api-access-cwtt4") pod "72257f30-9f17-4974-aeec-0755be040824" (UID: "72257f30-9f17-4974-aeec-0755be040824"). InnerVolumeSpecName "kube-api-access-cwtt4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.409262 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwtt4\" (UniqueName: \"kubernetes.io/projected/72257f30-9f17-4974-aeec-0755be040824-kube-api-access-cwtt4\") on node \"crc\" DevicePath \"\""
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.409340 4825 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72257f30-9f17-4974-aeec-0755be040824-client-ca\") on node \"crc\" DevicePath \"\""
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.409372 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72257f30-9f17-4974-aeec-0755be040824-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.409401 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72257f30-9f17-4974-aeec-0755be040824-config\") on node \"crc\" DevicePath \"\""
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.680129 4825 generic.go:334] "Generic (PLEG): container finished" podID="72257f30-9f17-4974-aeec-0755be040824" containerID="cbb7c82899217e49d71ebe5507d332f194a9026a3b7a2b9f301a847f5bdbd0c1" exitCode=0
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.680270 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k"
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.680260 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" event={"ID":"72257f30-9f17-4974-aeec-0755be040824","Type":"ContainerDied","Data":"cbb7c82899217e49d71ebe5507d332f194a9026a3b7a2b9f301a847f5bdbd0c1"}
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.682915 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k" event={"ID":"72257f30-9f17-4974-aeec-0755be040824","Type":"ContainerDied","Data":"cd87fc9a5352a354d0f3e7ac91453d10c8f7da4e5dd099c86c053faea289dd33"}
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.682961 4825 scope.go:117] "RemoveContainer" containerID="cbb7c82899217e49d71ebe5507d332f194a9026a3b7a2b9f301a847f5bdbd0c1"
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.686221 4825 generic.go:334] "Generic (PLEG): container finished" podID="e57fb87b-8cec-4c88-a802-69631aef1a2e" containerID="a6f09725f581870e9bebfddddde99485092b314d27d4912234ff05c0551b63a5" exitCode=0
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.686279 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" event={"ID":"e57fb87b-8cec-4c88-a802-69631aef1a2e","Type":"ContainerDied","Data":"a6f09725f581870e9bebfddddde99485092b314d27d4912234ff05c0551b63a5"}
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.686315 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5" event={"ID":"e57fb87b-8cec-4c88-a802-69631aef1a2e","Type":"ContainerDied","Data":"e900a7bb1f932c28e518998b40924e934eb93085cb2f086edb22d4f29dc2204d"}
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.686392 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s7pg5"
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.727187 4825 scope.go:117] "RemoveContainer" containerID="cbb7c82899217e49d71ebe5507d332f194a9026a3b7a2b9f301a847f5bdbd0c1"
Jan 22 15:28:56 crc kubenswrapper[4825]: E0122 15:28:56.730171 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbb7c82899217e49d71ebe5507d332f194a9026a3b7a2b9f301a847f5bdbd0c1\": container with ID starting with cbb7c82899217e49d71ebe5507d332f194a9026a3b7a2b9f301a847f5bdbd0c1 not found: ID does not exist" containerID="cbb7c82899217e49d71ebe5507d332f194a9026a3b7a2b9f301a847f5bdbd0c1"
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.730208 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbb7c82899217e49d71ebe5507d332f194a9026a3b7a2b9f301a847f5bdbd0c1"} err="failed to get container status \"cbb7c82899217e49d71ebe5507d332f194a9026a3b7a2b9f301a847f5bdbd0c1\": rpc error: code = NotFound desc = could not find container \"cbb7c82899217e49d71ebe5507d332f194a9026a3b7a2b9f301a847f5bdbd0c1\": container with ID starting with cbb7c82899217e49d71ebe5507d332f194a9026a3b7a2b9f301a847f5bdbd0c1 not found: ID does not exist"
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.730252 4825 scope.go:117] "RemoveContainer" containerID="a6f09725f581870e9bebfddddde99485092b314d27d4912234ff05c0551b63a5"
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.736757 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k"]
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.743225 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-77j7k"]
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.759555 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s7pg5"]
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.764209 4825 scope.go:117] "RemoveContainer" containerID="a6f09725f581870e9bebfddddde99485092b314d27d4912234ff05c0551b63a5"
Jan 22 15:28:56 crc kubenswrapper[4825]: E0122 15:28:56.765104 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6f09725f581870e9bebfddddde99485092b314d27d4912234ff05c0551b63a5\": container with ID starting with a6f09725f581870e9bebfddddde99485092b314d27d4912234ff05c0551b63a5 not found: ID does not exist" containerID="a6f09725f581870e9bebfddddde99485092b314d27d4912234ff05c0551b63a5"
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.765173 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6f09725f581870e9bebfddddde99485092b314d27d4912234ff05c0551b63a5"} err="failed to get container status \"a6f09725f581870e9bebfddddde99485092b314d27d4912234ff05c0551b63a5\": rpc error: code = NotFound desc = could not find container \"a6f09725f581870e9bebfddddde99485092b314d27d4912234ff05c0551b63a5\": container with ID starting with a6f09725f581870e9bebfddddde99485092b314d27d4912234ff05c0551b63a5 not found: ID does not exist"
Jan 22 15:28:56 crc kubenswrapper[4825]: I0122 15:28:56.766908 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s7pg5"]
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.523963 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72257f30-9f17-4974-aeec-0755be040824" path="/var/lib/kubelet/pods/72257f30-9f17-4974-aeec-0755be040824/volumes"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.524919 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e57fb87b-8cec-4c88-a802-69631aef1a2e" path="/var/lib/kubelet/pods/e57fb87b-8cec-4c88-a802-69631aef1a2e/volumes"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.681429 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-74577df4c5-5jf52"]
Jan 22 15:28:57 crc kubenswrapper[4825]: E0122 15:28:57.681700 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.681716 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 22 15:28:57 crc kubenswrapper[4825]: E0122 15:28:57.681732 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72257f30-9f17-4974-aeec-0755be040824" containerName="route-controller-manager"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.681740 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="72257f30-9f17-4974-aeec-0755be040824" containerName="route-controller-manager"
Jan 22 15:28:57 crc kubenswrapper[4825]: E0122 15:28:57.681750 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57fb87b-8cec-4c88-a802-69631aef1a2e" containerName="controller-manager"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.681760 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57fb87b-8cec-4c88-a802-69631aef1a2e" containerName="controller-manager"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.681906 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="72257f30-9f17-4974-aeec-0755be040824" containerName="route-controller-manager"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.681919 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.681930 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="e57fb87b-8cec-4c88-a802-69631aef1a2e" containerName="controller-manager"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.682416 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.689021 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.689088 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.689140 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"]
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.689288 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.689718 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.690299 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.690690 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.691516 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.693694 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.693883 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.694021 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.695952 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74577df4c5-5jf52"]
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.699521 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.699619 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.699784 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.703038 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"]
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.705868 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.822814 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-config\") pod \"controller-manager-74577df4c5-5jf52\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.822892 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/910c161c-013f-4170-acab-bfa4f2be5861-config\") pod \"route-controller-manager-5c6ddf959-9k744\" (UID: \"910c161c-013f-4170-acab-bfa4f2be5861\") " pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.822943 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-client-ca\") pod \"controller-manager-74577df4c5-5jf52\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.822993 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/910c161c-013f-4170-acab-bfa4f2be5861-serving-cert\") pod \"route-controller-manager-5c6ddf959-9k744\" (UID: \"910c161c-013f-4170-acab-bfa4f2be5861\") " pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.823039 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57tdw\" (UniqueName: \"kubernetes.io/projected/910c161c-013f-4170-acab-bfa4f2be5861-kube-api-access-57tdw\") pod \"route-controller-manager-5c6ddf959-9k744\" (UID: \"910c161c-013f-4170-acab-bfa4f2be5861\") " pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.823074 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-proxy-ca-bundles\") pod \"controller-manager-74577df4c5-5jf52\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.823109 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd4vd\" (UniqueName: \"kubernetes.io/projected/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-kube-api-access-bd4vd\") pod \"controller-manager-74577df4c5-5jf52\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.823453 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-serving-cert\") pod \"controller-manager-74577df4c5-5jf52\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.823552 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910c161c-013f-4170-acab-bfa4f2be5861-client-ca\") pod \"route-controller-manager-5c6ddf959-9k744\" (UID: \"910c161c-013f-4170-acab-bfa4f2be5861\") " pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.924766 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-proxy-ca-bundles\") pod \"controller-manager-74577df4c5-5jf52\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.924823 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd4vd\" (UniqueName: \"kubernetes.io/projected/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-kube-api-access-bd4vd\") pod \"controller-manager-74577df4c5-5jf52\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.924842 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-serving-cert\") pod \"controller-manager-74577df4c5-5jf52\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.924864 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910c161c-013f-4170-acab-bfa4f2be5861-client-ca\") pod \"route-controller-manager-5c6ddf959-9k744\" (UID: \"910c161c-013f-4170-acab-bfa4f2be5861\") " pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.924894 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-config\") pod \"controller-manager-74577df4c5-5jf52\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.924913 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/910c161c-013f-4170-acab-bfa4f2be5861-config\") pod \"route-controller-manager-5c6ddf959-9k744\" (UID: \"910c161c-013f-4170-acab-bfa4f2be5861\") " pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.924944 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-client-ca\") pod \"controller-manager-74577df4c5-5jf52\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.924962 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/910c161c-013f-4170-acab-bfa4f2be5861-serving-cert\") pod \"route-controller-manager-5c6ddf959-9k744\" (UID: \"910c161c-013f-4170-acab-bfa4f2be5861\") " pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.925035 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57tdw\" (UniqueName: \"kubernetes.io/projected/910c161c-013f-4170-acab-bfa4f2be5861-kube-api-access-57tdw\") pod \"route-controller-manager-5c6ddf959-9k744\" (UID: \"910c161c-013f-4170-acab-bfa4f2be5861\") " pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.926217 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910c161c-013f-4170-acab-bfa4f2be5861-client-ca\") pod \"route-controller-manager-5c6ddf959-9k744\" (UID: \"910c161c-013f-4170-acab-bfa4f2be5861\") " pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.926400 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-proxy-ca-bundles\") pod \"controller-manager-74577df4c5-5jf52\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.926715 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-client-ca\") pod \"controller-manager-74577df4c5-5jf52\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.926805 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/910c161c-013f-4170-acab-bfa4f2be5861-config\") pod \"route-controller-manager-5c6ddf959-9k744\" (UID: \"910c161c-013f-4170-acab-bfa4f2be5861\") " pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.927002 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-config\") pod \"controller-manager-74577df4c5-5jf52\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.929826 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/910c161c-013f-4170-acab-bfa4f2be5861-serving-cert\") pod \"route-controller-manager-5c6ddf959-9k744\" (UID: \"910c161c-013f-4170-acab-bfa4f2be5861\") " pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.929947 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-serving-cert\") pod \"controller-manager-74577df4c5-5jf52\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.954363 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd4vd\" (UniqueName: \"kubernetes.io/projected/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-kube-api-access-bd4vd\") pod \"controller-manager-74577df4c5-5jf52\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52"
Jan 22 15:28:57 crc kubenswrapper[4825]: I0122 15:28:57.955382 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57tdw\" (UniqueName: \"kubernetes.io/projected/910c161c-013f-4170-acab-bfa4f2be5861-kube-api-access-57tdw\") pod \"route-controller-manager-5c6ddf959-9k744\" (UID: \"910c161c-013f-4170-acab-bfa4f2be5861\") " pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"
Jan 22 15:28:58 crc kubenswrapper[4825]: I0122 15:28:58.006418 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52"
Jan 22 15:28:58 crc kubenswrapper[4825]: I0122 15:28:58.014933 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"
Jan 22 15:28:58 crc kubenswrapper[4825]: I0122 15:28:58.321216 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"]
Jan 22 15:28:58 crc kubenswrapper[4825]: I0122 15:28:58.465724 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74577df4c5-5jf52"]
Jan 22 15:28:58 crc kubenswrapper[4825]: W0122 15:28:58.468238 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5d2b38b_9d82_4261_9805_21c3d9cc7c80.slice/crio-a7ae2bc1562bf17c6ea8463504204acca638e18282c56f18de070ae5c91dcf86 WatchSource:0}: Error finding container a7ae2bc1562bf17c6ea8463504204acca638e18282c56f18de070ae5c91dcf86: Status 404 returned error can't find the container with id a7ae2bc1562bf17c6ea8463504204acca638e18282c56f18de070ae5c91dcf86
Jan 22 15:28:58 crc kubenswrapper[4825]: I0122 15:28:58.700154 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744" event={"ID":"910c161c-013f-4170-acab-bfa4f2be5861","Type":"ContainerStarted","Data":"4aae2cec50f7940fcf0f44072a51b1ee15a82cd003f571ccc7de98a818d8a235"}
Jan 22 15:28:58 crc kubenswrapper[4825]: I0122 15:28:58.700218 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"
event={"ID":"910c161c-013f-4170-acab-bfa4f2be5861","Type":"ContainerStarted","Data":"43b52984d60e2b0119c538266f42d12cc18fd1a30eebab59c209cb8425c60cab"} Jan 22 15:28:58 crc kubenswrapper[4825]: I0122 15:28:58.721192 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744" Jan 22 15:28:58 crc kubenswrapper[4825]: I0122 15:28:58.724042 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52" event={"ID":"a5d2b38b-9d82-4261-9805-21c3d9cc7c80","Type":"ContainerStarted","Data":"61459c278b9cf40e77307d11b60f66973b3df9ec7f6732146b116c0457647e37"} Jan 22 15:28:58 crc kubenswrapper[4825]: I0122 15:28:58.724083 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52" event={"ID":"a5d2b38b-9d82-4261-9805-21c3d9cc7c80","Type":"ContainerStarted","Data":"a7ae2bc1562bf17c6ea8463504204acca638e18282c56f18de070ae5c91dcf86"} Jan 22 15:28:58 crc kubenswrapper[4825]: I0122 15:28:58.724334 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52" Jan 22 15:28:58 crc kubenswrapper[4825]: I0122 15:28:58.728622 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52" Jan 22 15:28:58 crc kubenswrapper[4825]: I0122 15:28:58.745091 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744" podStartSLOduration=2.745069292 podStartE2EDuration="2.745069292s" podCreationTimestamp="2026-01-22 15:28:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:28:58.737500863 +0000 UTC m=+285.499027793" 
watchObservedRunningTime="2026-01-22 15:28:58.745069292 +0000 UTC m=+285.506596202" Jan 22 15:28:58 crc kubenswrapper[4825]: I0122 15:28:58.755060 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52" podStartSLOduration=2.75503954 podStartE2EDuration="2.75503954s" podCreationTimestamp="2026-01-22 15:28:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:28:58.752730113 +0000 UTC m=+285.514257043" watchObservedRunningTime="2026-01-22 15:28:58.75503954 +0000 UTC m=+285.516566460" Jan 22 15:28:58 crc kubenswrapper[4825]: I0122 15:28:58.988343 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744" Jan 22 15:29:01 crc kubenswrapper[4825]: I0122 15:29:01.409204 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"] Jan 22 15:29:01 crc kubenswrapper[4825]: I0122 15:29:01.749407 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744" podUID="910c161c-013f-4170-acab-bfa4f2be5861" containerName="route-controller-manager" containerID="cri-o://4aae2cec50f7940fcf0f44072a51b1ee15a82cd003f571ccc7de98a818d8a235" gracePeriod=30 Jan 22 15:29:02 crc kubenswrapper[4825]: I0122 15:29:02.995632 4825 generic.go:334] "Generic (PLEG): container finished" podID="910c161c-013f-4170-acab-bfa4f2be5861" containerID="4aae2cec50f7940fcf0f44072a51b1ee15a82cd003f571ccc7de98a818d8a235" exitCode=0 Jan 22 15:29:02 crc kubenswrapper[4825]: I0122 15:29:02.995702 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744" 
event={"ID":"910c161c-013f-4170-acab-bfa4f2be5861","Type":"ContainerDied","Data":"4aae2cec50f7940fcf0f44072a51b1ee15a82cd003f571ccc7de98a818d8a235"} Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.415815 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.445437 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc"] Jan 22 15:29:03 crc kubenswrapper[4825]: E0122 15:29:03.445665 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="910c161c-013f-4170-acab-bfa4f2be5861" containerName="route-controller-manager" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.445682 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="910c161c-013f-4170-acab-bfa4f2be5861" containerName="route-controller-manager" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.445791 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="910c161c-013f-4170-acab-bfa4f2be5861" containerName="route-controller-manager" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.446222 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.455650 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc"] Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.494082 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57tdw\" (UniqueName: \"kubernetes.io/projected/910c161c-013f-4170-acab-bfa4f2be5861-kube-api-access-57tdw\") pod \"910c161c-013f-4170-acab-bfa4f2be5861\" (UID: \"910c161c-013f-4170-acab-bfa4f2be5861\") " Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.494207 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9a7bf61-888a-4fdc-b9ce-2eaf73b85488-client-ca\") pod \"route-controller-manager-8575f45546-9lnnc\" (UID: \"d9a7bf61-888a-4fdc-b9ce-2eaf73b85488\") " pod="openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.494235 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9a7bf61-888a-4fdc-b9ce-2eaf73b85488-serving-cert\") pod \"route-controller-manager-8575f45546-9lnnc\" (UID: \"d9a7bf61-888a-4fdc-b9ce-2eaf73b85488\") " pod="openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.494305 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gt4p\" (UniqueName: \"kubernetes.io/projected/d9a7bf61-888a-4fdc-b9ce-2eaf73b85488-kube-api-access-4gt4p\") pod \"route-controller-manager-8575f45546-9lnnc\" (UID: \"d9a7bf61-888a-4fdc-b9ce-2eaf73b85488\") " 
pod="openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.494447 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a7bf61-888a-4fdc-b9ce-2eaf73b85488-config\") pod \"route-controller-manager-8575f45546-9lnnc\" (UID: \"d9a7bf61-888a-4fdc-b9ce-2eaf73b85488\") " pod="openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.499360 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/910c161c-013f-4170-acab-bfa4f2be5861-kube-api-access-57tdw" (OuterVolumeSpecName: "kube-api-access-57tdw") pod "910c161c-013f-4170-acab-bfa4f2be5861" (UID: "910c161c-013f-4170-acab-bfa4f2be5861"). InnerVolumeSpecName "kube-api-access-57tdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.595801 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/910c161c-013f-4170-acab-bfa4f2be5861-serving-cert\") pod \"910c161c-013f-4170-acab-bfa4f2be5861\" (UID: \"910c161c-013f-4170-acab-bfa4f2be5861\") " Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.595850 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/910c161c-013f-4170-acab-bfa4f2be5861-config\") pod \"910c161c-013f-4170-acab-bfa4f2be5861\" (UID: \"910c161c-013f-4170-acab-bfa4f2be5861\") " Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.595944 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910c161c-013f-4170-acab-bfa4f2be5861-client-ca\") pod \"910c161c-013f-4170-acab-bfa4f2be5861\" (UID: 
\"910c161c-013f-4170-acab-bfa4f2be5861\") " Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.596171 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gt4p\" (UniqueName: \"kubernetes.io/projected/d9a7bf61-888a-4fdc-b9ce-2eaf73b85488-kube-api-access-4gt4p\") pod \"route-controller-manager-8575f45546-9lnnc\" (UID: \"d9a7bf61-888a-4fdc-b9ce-2eaf73b85488\") " pod="openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.596245 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a7bf61-888a-4fdc-b9ce-2eaf73b85488-config\") pod \"route-controller-manager-8575f45546-9lnnc\" (UID: \"d9a7bf61-888a-4fdc-b9ce-2eaf73b85488\") " pod="openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.596287 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9a7bf61-888a-4fdc-b9ce-2eaf73b85488-client-ca\") pod \"route-controller-manager-8575f45546-9lnnc\" (UID: \"d9a7bf61-888a-4fdc-b9ce-2eaf73b85488\") " pod="openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.596313 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9a7bf61-888a-4fdc-b9ce-2eaf73b85488-serving-cert\") pod \"route-controller-manager-8575f45546-9lnnc\" (UID: \"d9a7bf61-888a-4fdc-b9ce-2eaf73b85488\") " pod="openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.596392 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57tdw\" (UniqueName: 
\"kubernetes.io/projected/910c161c-013f-4170-acab-bfa4f2be5861-kube-api-access-57tdw\") on node \"crc\" DevicePath \"\"" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.596595 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/910c161c-013f-4170-acab-bfa4f2be5861-client-ca" (OuterVolumeSpecName: "client-ca") pod "910c161c-013f-4170-acab-bfa4f2be5861" (UID: "910c161c-013f-4170-acab-bfa4f2be5861"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.596615 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/910c161c-013f-4170-acab-bfa4f2be5861-config" (OuterVolumeSpecName: "config") pod "910c161c-013f-4170-acab-bfa4f2be5861" (UID: "910c161c-013f-4170-acab-bfa4f2be5861"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.597408 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9a7bf61-888a-4fdc-b9ce-2eaf73b85488-client-ca\") pod \"route-controller-manager-8575f45546-9lnnc\" (UID: \"d9a7bf61-888a-4fdc-b9ce-2eaf73b85488\") " pod="openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.597610 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a7bf61-888a-4fdc-b9ce-2eaf73b85488-config\") pod \"route-controller-manager-8575f45546-9lnnc\" (UID: \"d9a7bf61-888a-4fdc-b9ce-2eaf73b85488\") " pod="openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.601534 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/910c161c-013f-4170-acab-bfa4f2be5861-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "910c161c-013f-4170-acab-bfa4f2be5861" (UID: "910c161c-013f-4170-acab-bfa4f2be5861"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.601637 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9a7bf61-888a-4fdc-b9ce-2eaf73b85488-serving-cert\") pod \"route-controller-manager-8575f45546-9lnnc\" (UID: \"d9a7bf61-888a-4fdc-b9ce-2eaf73b85488\") " pod="openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.612886 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gt4p\" (UniqueName: \"kubernetes.io/projected/d9a7bf61-888a-4fdc-b9ce-2eaf73b85488-kube-api-access-4gt4p\") pod \"route-controller-manager-8575f45546-9lnnc\" (UID: \"d9a7bf61-888a-4fdc-b9ce-2eaf73b85488\") " pod="openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.696697 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/910c161c-013f-4170-acab-bfa4f2be5861-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.696729 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/910c161c-013f-4170-acab-bfa4f2be5861-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.696737 4825 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910c161c-013f-4170-acab-bfa4f2be5861-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 15:29:03 crc kubenswrapper[4825]: I0122 15:29:03.760582 4825 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc" Jan 22 15:29:04 crc kubenswrapper[4825]: I0122 15:29:04.005310 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744" event={"ID":"910c161c-013f-4170-acab-bfa4f2be5861","Type":"ContainerDied","Data":"43b52984d60e2b0119c538266f42d12cc18fd1a30eebab59c209cb8425c60cab"} Jan 22 15:29:04 crc kubenswrapper[4825]: I0122 15:29:04.005359 4825 scope.go:117] "RemoveContainer" containerID="4aae2cec50f7940fcf0f44072a51b1ee15a82cd003f571ccc7de98a818d8a235" Jan 22 15:29:04 crc kubenswrapper[4825]: I0122 15:29:04.005474 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744" Jan 22 15:29:04 crc kubenswrapper[4825]: I0122 15:29:04.035649 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"] Jan 22 15:29:04 crc kubenswrapper[4825]: I0122 15:29:04.038616 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6ddf959-9k744"] Jan 22 15:29:04 crc kubenswrapper[4825]: I0122 15:29:04.152244 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc"] Jan 22 15:29:05 crc kubenswrapper[4825]: I0122 15:29:05.020292 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc" event={"ID":"d9a7bf61-888a-4fdc-b9ce-2eaf73b85488","Type":"ContainerStarted","Data":"9a86e3195774cb45f93467878e39348182eda50af41afdbc5a499d27289e8482"} Jan 22 15:29:05 crc kubenswrapper[4825]: I0122 15:29:05.020335 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc" event={"ID":"d9a7bf61-888a-4fdc-b9ce-2eaf73b85488","Type":"ContainerStarted","Data":"b683fad7b1844f51ccec4393847ce81e7ea9ca1ad3c79913ec796af3df0d873d"} Jan 22 15:29:05 crc kubenswrapper[4825]: I0122 15:29:05.020685 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc" Jan 22 15:29:05 crc kubenswrapper[4825]: I0122 15:29:05.025469 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc" Jan 22 15:29:05 crc kubenswrapper[4825]: I0122 15:29:05.067216 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8575f45546-9lnnc" podStartSLOduration=4.067194914 podStartE2EDuration="4.067194914s" podCreationTimestamp="2026-01-22 15:29:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:29:05.043623052 +0000 UTC m=+291.805149962" watchObservedRunningTime="2026-01-22 15:29:05.067194914 +0000 UTC m=+291.828721834" Jan 22 15:29:05 crc kubenswrapper[4825]: I0122 15:29:05.526942 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="910c161c-013f-4170-acab-bfa4f2be5861" path="/var/lib/kubelet/pods/910c161c-013f-4170-acab-bfa4f2be5861/volumes" Jan 22 15:29:13 crc kubenswrapper[4825]: I0122 15:29:13.380678 4825 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.558791 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kf2vh"] Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.561750 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kf2vh" Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.564273 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.581933 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kf2vh"] Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.756489 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-87fzs"] Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.757965 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-87fzs" Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.774109 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3-utilities\") pod \"community-operators-kf2vh\" (UID: \"57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3\") " pod="openshift-marketplace/community-operators-kf2vh" Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.774175 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfptc\" (UniqueName: \"kubernetes.io/projected/57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3-kube-api-access-rfptc\") pod \"community-operators-kf2vh\" (UID: \"57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3\") " pod="openshift-marketplace/community-operators-kf2vh" Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.774196 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4d396c4-dbe4-4672-af11-a5db1019b169-utilities\") pod \"certified-operators-87fzs\" (UID: \"d4d396c4-dbe4-4672-af11-a5db1019b169\") " 
pod="openshift-marketplace/certified-operators-87fzs" Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.774214 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxbtw\" (UniqueName: \"kubernetes.io/projected/d4d396c4-dbe4-4672-af11-a5db1019b169-kube-api-access-lxbtw\") pod \"certified-operators-87fzs\" (UID: \"d4d396c4-dbe4-4672-af11-a5db1019b169\") " pod="openshift-marketplace/certified-operators-87fzs" Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.774233 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4d396c4-dbe4-4672-af11-a5db1019b169-catalog-content\") pod \"certified-operators-87fzs\" (UID: \"d4d396c4-dbe4-4672-af11-a5db1019b169\") " pod="openshift-marketplace/certified-operators-87fzs" Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.774293 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3-catalog-content\") pod \"community-operators-kf2vh\" (UID: \"57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3\") " pod="openshift-marketplace/community-operators-kf2vh" Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.777186 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-87fzs"] Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.779686 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.875608 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3-catalog-content\") pod \"community-operators-kf2vh\" (UID: 
\"57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3\") " pod="openshift-marketplace/community-operators-kf2vh" Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.875674 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3-utilities\") pod \"community-operators-kf2vh\" (UID: \"57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3\") " pod="openshift-marketplace/community-operators-kf2vh" Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.875704 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfptc\" (UniqueName: \"kubernetes.io/projected/57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3-kube-api-access-rfptc\") pod \"community-operators-kf2vh\" (UID: \"57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3\") " pod="openshift-marketplace/community-operators-kf2vh" Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.875739 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4d396c4-dbe4-4672-af11-a5db1019b169-utilities\") pod \"certified-operators-87fzs\" (UID: \"d4d396c4-dbe4-4672-af11-a5db1019b169\") " pod="openshift-marketplace/certified-operators-87fzs" Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.875764 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxbtw\" (UniqueName: \"kubernetes.io/projected/d4d396c4-dbe4-4672-af11-a5db1019b169-kube-api-access-lxbtw\") pod \"certified-operators-87fzs\" (UID: \"d4d396c4-dbe4-4672-af11-a5db1019b169\") " pod="openshift-marketplace/certified-operators-87fzs" Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.875787 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4d396c4-dbe4-4672-af11-a5db1019b169-catalog-content\") pod \"certified-operators-87fzs\" (UID: 
\"d4d396c4-dbe4-4672-af11-a5db1019b169\") " pod="openshift-marketplace/certified-operators-87fzs"
Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.876562 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4d396c4-dbe4-4672-af11-a5db1019b169-catalog-content\") pod \"certified-operators-87fzs\" (UID: \"d4d396c4-dbe4-4672-af11-a5db1019b169\") " pod="openshift-marketplace/certified-operators-87fzs"
Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.877224 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3-catalog-content\") pod \"community-operators-kf2vh\" (UID: \"57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3\") " pod="openshift-marketplace/community-operators-kf2vh"
Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.877445 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3-utilities\") pod \"community-operators-kf2vh\" (UID: \"57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3\") " pod="openshift-marketplace/community-operators-kf2vh"
Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.877664 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4d396c4-dbe4-4672-af11-a5db1019b169-utilities\") pod \"certified-operators-87fzs\" (UID: \"d4d396c4-dbe4-4672-af11-a5db1019b169\") " pod="openshift-marketplace/certified-operators-87fzs"
Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.900671 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfptc\" (UniqueName: \"kubernetes.io/projected/57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3-kube-api-access-rfptc\") pod \"community-operators-kf2vh\" (UID: \"57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3\") " pod="openshift-marketplace/community-operators-kf2vh"
Jan 22 15:29:15 crc kubenswrapper[4825]: I0122 15:29:15.901559 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxbtw\" (UniqueName: \"kubernetes.io/projected/d4d396c4-dbe4-4672-af11-a5db1019b169-kube-api-access-lxbtw\") pod \"certified-operators-87fzs\" (UID: \"d4d396c4-dbe4-4672-af11-a5db1019b169\") " pod="openshift-marketplace/certified-operators-87fzs"
Jan 22 15:29:16 crc kubenswrapper[4825]: I0122 15:29:16.083122 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-87fzs"
Jan 22 15:29:16 crc kubenswrapper[4825]: I0122 15:29:16.192897 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kf2vh"
Jan 22 15:29:16 crc kubenswrapper[4825]: I0122 15:29:16.585536 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-87fzs"]
Jan 22 15:29:16 crc kubenswrapper[4825]: I0122 15:29:16.662456 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kf2vh"]
Jan 22 15:29:16 crc kubenswrapper[4825]: W0122 15:29:16.672194 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57e9db63_ab8b_4fa5_98c3_46cfd5e87fc3.slice/crio-f7f56913ba2b2d2ba7de3c3e6a032d88056dfaf38594fabf10efe6ef599b5005 WatchSource:0}: Error finding container f7f56913ba2b2d2ba7de3c3e6a032d88056dfaf38594fabf10efe6ef599b5005: Status 404 returned error can't find the container with id f7f56913ba2b2d2ba7de3c3e6a032d88056dfaf38594fabf10efe6ef599b5005
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.154137 4825 generic.go:334] "Generic (PLEG): container finished" podID="57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3" containerID="e58302c7a9477ab60c38732125dfedb6c12d6cbf93d3c3d8c019bf9192e83a39" exitCode=0
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.154751 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf2vh" event={"ID":"57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3","Type":"ContainerDied","Data":"e58302c7a9477ab60c38732125dfedb6c12d6cbf93d3c3d8c019bf9192e83a39"}
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.154812 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf2vh" event={"ID":"57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3","Type":"ContainerStarted","Data":"f7f56913ba2b2d2ba7de3c3e6a032d88056dfaf38594fabf10efe6ef599b5005"}
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.158413 4825 generic.go:334] "Generic (PLEG): container finished" podID="d4d396c4-dbe4-4672-af11-a5db1019b169" containerID="82890d3418884942fabd00b4e732428874d6bfc2486552e870161af11c92479b" exitCode=0
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.158451 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87fzs" event={"ID":"d4d396c4-dbe4-4672-af11-a5db1019b169","Type":"ContainerDied","Data":"82890d3418884942fabd00b4e732428874d6bfc2486552e870161af11c92479b"}
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.158470 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87fzs" event={"ID":"d4d396c4-dbe4-4672-af11-a5db1019b169","Type":"ContainerStarted","Data":"bd78776f08e1c26833bf9573dfe66f10b2055d6799170a59f3a48ee70285ca81"}
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.356020 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l7f7x"]
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.357850 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7f7x"
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.360052 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.363421 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7f7x"]
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.561706 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30bcd0c8-9381-4b99-a083-b014af82df43-utilities\") pod \"redhat-marketplace-l7f7x\" (UID: \"30bcd0c8-9381-4b99-a083-b014af82df43\") " pod="openshift-marketplace/redhat-marketplace-l7f7x"
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.561743 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwhqm\" (UniqueName: \"kubernetes.io/projected/30bcd0c8-9381-4b99-a083-b014af82df43-kube-api-access-nwhqm\") pod \"redhat-marketplace-l7f7x\" (UID: \"30bcd0c8-9381-4b99-a083-b014af82df43\") " pod="openshift-marketplace/redhat-marketplace-l7f7x"
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.562061 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30bcd0c8-9381-4b99-a083-b014af82df43-catalog-content\") pod \"redhat-marketplace-l7f7x\" (UID: \"30bcd0c8-9381-4b99-a083-b014af82df43\") " pod="openshift-marketplace/redhat-marketplace-l7f7x"
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.662887 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30bcd0c8-9381-4b99-a083-b014af82df43-utilities\") pod \"redhat-marketplace-l7f7x\" (UID: \"30bcd0c8-9381-4b99-a083-b014af82df43\") " pod="openshift-marketplace/redhat-marketplace-l7f7x"
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.662938 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwhqm\" (UniqueName: \"kubernetes.io/projected/30bcd0c8-9381-4b99-a083-b014af82df43-kube-api-access-nwhqm\") pod \"redhat-marketplace-l7f7x\" (UID: \"30bcd0c8-9381-4b99-a083-b014af82df43\") " pod="openshift-marketplace/redhat-marketplace-l7f7x"
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.662975 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30bcd0c8-9381-4b99-a083-b014af82df43-catalog-content\") pod \"redhat-marketplace-l7f7x\" (UID: \"30bcd0c8-9381-4b99-a083-b014af82df43\") " pod="openshift-marketplace/redhat-marketplace-l7f7x"
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.663459 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30bcd0c8-9381-4b99-a083-b014af82df43-catalog-content\") pod \"redhat-marketplace-l7f7x\" (UID: \"30bcd0c8-9381-4b99-a083-b014af82df43\") " pod="openshift-marketplace/redhat-marketplace-l7f7x"
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.663865 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30bcd0c8-9381-4b99-a083-b014af82df43-utilities\") pod \"redhat-marketplace-l7f7x\" (UID: \"30bcd0c8-9381-4b99-a083-b014af82df43\") " pod="openshift-marketplace/redhat-marketplace-l7f7x"
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.696089 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwhqm\" (UniqueName: \"kubernetes.io/projected/30bcd0c8-9381-4b99-a083-b014af82df43-kube-api-access-nwhqm\") pod \"redhat-marketplace-l7f7x\" (UID: \"30bcd0c8-9381-4b99-a083-b014af82df43\") " pod="openshift-marketplace/redhat-marketplace-l7f7x"
Jan 22 15:29:17 crc kubenswrapper[4825]: I0122 15:29:17.718881 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7f7x"
Jan 22 15:29:18 crc kubenswrapper[4825]: I0122 15:29:18.133604 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7f7x"]
Jan 22 15:29:18 crc kubenswrapper[4825]: W0122 15:29:18.139647 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30bcd0c8_9381_4b99_a083_b014af82df43.slice/crio-5e65e9d722d3a6ffb35872166c6aad7b43dce76187ce268c15ddf2cc8f1b2cae WatchSource:0}: Error finding container 5e65e9d722d3a6ffb35872166c6aad7b43dce76187ce268c15ddf2cc8f1b2cae: Status 404 returned error can't find the container with id 5e65e9d722d3a6ffb35872166c6aad7b43dce76187ce268c15ddf2cc8f1b2cae
Jan 22 15:29:18 crc kubenswrapper[4825]: I0122 15:29:18.161090 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qg9m6"]
Jan 22 15:29:18 crc kubenswrapper[4825]: I0122 15:29:18.163063 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qg9m6"
Jan 22 15:29:18 crc kubenswrapper[4825]: I0122 15:29:18.166342 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 22 15:29:18 crc kubenswrapper[4825]: I0122 15:29:18.169902 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf2vh" event={"ID":"57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3","Type":"ContainerStarted","Data":"7a6b6023320e95618f63078f796d912fd76c38430576d4ac722a4234aee3a886"}
Jan 22 15:29:18 crc kubenswrapper[4825]: I0122 15:29:18.172374 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72clb\" (UniqueName: \"kubernetes.io/projected/60dda316-e11c-4286-866e-52fa6e3db5f9-kube-api-access-72clb\") pod \"redhat-operators-qg9m6\" (UID: \"60dda316-e11c-4286-866e-52fa6e3db5f9\") " pod="openshift-marketplace/redhat-operators-qg9m6"
Jan 22 15:29:18 crc kubenswrapper[4825]: I0122 15:29:18.172566 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60dda316-e11c-4286-866e-52fa6e3db5f9-catalog-content\") pod \"redhat-operators-qg9m6\" (UID: \"60dda316-e11c-4286-866e-52fa6e3db5f9\") " pod="openshift-marketplace/redhat-operators-qg9m6"
Jan 22 15:29:18 crc kubenswrapper[4825]: I0122 15:29:18.172685 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60dda316-e11c-4286-866e-52fa6e3db5f9-utilities\") pod \"redhat-operators-qg9m6\" (UID: \"60dda316-e11c-4286-866e-52fa6e3db5f9\") " pod="openshift-marketplace/redhat-operators-qg9m6"
Jan 22 15:29:18 crc kubenswrapper[4825]: I0122 15:29:18.173509 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87fzs" event={"ID":"d4d396c4-dbe4-4672-af11-a5db1019b169","Type":"ContainerStarted","Data":"28610f6fde7b6a4caf0263126c1cc3490a03d0adceb72aa689969839f9ec9c6c"}
Jan 22 15:29:18 crc kubenswrapper[4825]: I0122 15:29:18.174680 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7f7x" event={"ID":"30bcd0c8-9381-4b99-a083-b014af82df43","Type":"ContainerStarted","Data":"5e65e9d722d3a6ffb35872166c6aad7b43dce76187ce268c15ddf2cc8f1b2cae"}
Jan 22 15:29:18 crc kubenswrapper[4825]: I0122 15:29:18.184124 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qg9m6"]
Jan 22 15:29:18 crc kubenswrapper[4825]: I0122 15:29:18.273497 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72clb\" (UniqueName: \"kubernetes.io/projected/60dda316-e11c-4286-866e-52fa6e3db5f9-kube-api-access-72clb\") pod \"redhat-operators-qg9m6\" (UID: \"60dda316-e11c-4286-866e-52fa6e3db5f9\") " pod="openshift-marketplace/redhat-operators-qg9m6"
Jan 22 15:29:18 crc kubenswrapper[4825]: I0122 15:29:18.273550 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60dda316-e11c-4286-866e-52fa6e3db5f9-catalog-content\") pod \"redhat-operators-qg9m6\" (UID: \"60dda316-e11c-4286-866e-52fa6e3db5f9\") " pod="openshift-marketplace/redhat-operators-qg9m6"
Jan 22 15:29:18 crc kubenswrapper[4825]: I0122 15:29:18.273588 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60dda316-e11c-4286-866e-52fa6e3db5f9-utilities\") pod \"redhat-operators-qg9m6\" (UID: \"60dda316-e11c-4286-866e-52fa6e3db5f9\") " pod="openshift-marketplace/redhat-operators-qg9m6"
Jan 22 15:29:18 crc kubenswrapper[4825]: I0122 15:29:18.275397 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60dda316-e11c-4286-866e-52fa6e3db5f9-utilities\") pod \"redhat-operators-qg9m6\" (UID: \"60dda316-e11c-4286-866e-52fa6e3db5f9\") " pod="openshift-marketplace/redhat-operators-qg9m6"
Jan 22 15:29:18 crc kubenswrapper[4825]: I0122 15:29:18.275812 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60dda316-e11c-4286-866e-52fa6e3db5f9-catalog-content\") pod \"redhat-operators-qg9m6\" (UID: \"60dda316-e11c-4286-866e-52fa6e3db5f9\") " pod="openshift-marketplace/redhat-operators-qg9m6"
Jan 22 15:29:18 crc kubenswrapper[4825]: I0122 15:29:18.296140 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72clb\" (UniqueName: \"kubernetes.io/projected/60dda316-e11c-4286-866e-52fa6e3db5f9-kube-api-access-72clb\") pod \"redhat-operators-qg9m6\" (UID: \"60dda316-e11c-4286-866e-52fa6e3db5f9\") " pod="openshift-marketplace/redhat-operators-qg9m6"
Jan 22 15:29:18 crc kubenswrapper[4825]: I0122 15:29:18.588720 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qg9m6"
Jan 22 15:29:19 crc kubenswrapper[4825]: I0122 15:29:19.181089 4825 generic.go:334] "Generic (PLEG): container finished" podID="30bcd0c8-9381-4b99-a083-b014af82df43" containerID="66e344673b2b7ba051cc372e769c19a228cfa3a0e60ffd72b31ba0f00d01aef6" exitCode=0
Jan 22 15:29:19 crc kubenswrapper[4825]: I0122 15:29:19.181193 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7f7x" event={"ID":"30bcd0c8-9381-4b99-a083-b014af82df43","Type":"ContainerDied","Data":"66e344673b2b7ba051cc372e769c19a228cfa3a0e60ffd72b31ba0f00d01aef6"}
Jan 22 15:29:19 crc kubenswrapper[4825]: I0122 15:29:19.182835 4825 generic.go:334] "Generic (PLEG): container finished" podID="57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3" containerID="7a6b6023320e95618f63078f796d912fd76c38430576d4ac722a4234aee3a886" exitCode=0
Jan 22 15:29:19 crc kubenswrapper[4825]: W0122 15:29:19.182903 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60dda316_e11c_4286_866e_52fa6e3db5f9.slice/crio-e1b3fe091b3e78b1ab02382a675bf88cada51153f7b6fc510b1e4c4ad4688f42 WatchSource:0}: Error finding container e1b3fe091b3e78b1ab02382a675bf88cada51153f7b6fc510b1e4c4ad4688f42: Status 404 returned error can't find the container with id e1b3fe091b3e78b1ab02382a675bf88cada51153f7b6fc510b1e4c4ad4688f42
Jan 22 15:29:19 crc kubenswrapper[4825]: I0122 15:29:19.182934 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf2vh" event={"ID":"57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3","Type":"ContainerDied","Data":"7a6b6023320e95618f63078f796d912fd76c38430576d4ac722a4234aee3a886"}
Jan 22 15:29:19 crc kubenswrapper[4825]: I0122 15:29:19.184527 4825 generic.go:334] "Generic (PLEG): container finished" podID="d4d396c4-dbe4-4672-af11-a5db1019b169" containerID="28610f6fde7b6a4caf0263126c1cc3490a03d0adceb72aa689969839f9ec9c6c" exitCode=0
Jan 22 15:29:19 crc kubenswrapper[4825]: I0122 15:29:19.184561 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87fzs" event={"ID":"d4d396c4-dbe4-4672-af11-a5db1019b169","Type":"ContainerDied","Data":"28610f6fde7b6a4caf0263126c1cc3490a03d0adceb72aa689969839f9ec9c6c"}
Jan 22 15:29:19 crc kubenswrapper[4825]: I0122 15:29:19.192930 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qg9m6"]
Jan 22 15:29:20 crc kubenswrapper[4825]: I0122 15:29:20.191826 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7f7x" event={"ID":"30bcd0c8-9381-4b99-a083-b014af82df43","Type":"ContainerStarted","Data":"82654f14afa607a34260eea56f9b6d1c824052273f7a5c4756ebd268ab0fafc7"}
Jan 22 15:29:20 crc kubenswrapper[4825]: I0122 15:29:20.194830 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf2vh" event={"ID":"57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3","Type":"ContainerStarted","Data":"8f5ab350f7970f5ec68659ddc8aca66142989c7478fcf2a3c5174190793870c3"}
Jan 22 15:29:20 crc kubenswrapper[4825]: I0122 15:29:20.198767 4825 generic.go:334] "Generic (PLEG): container finished" podID="60dda316-e11c-4286-866e-52fa6e3db5f9" containerID="d99a945e10b533a69a09dfad2545eba8d7c347e3d4e48c295935da7fbe01b520" exitCode=0
Jan 22 15:29:20 crc kubenswrapper[4825]: I0122 15:29:20.198969 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qg9m6" event={"ID":"60dda316-e11c-4286-866e-52fa6e3db5f9","Type":"ContainerDied","Data":"d99a945e10b533a69a09dfad2545eba8d7c347e3d4e48c295935da7fbe01b520"}
Jan 22 15:29:20 crc kubenswrapper[4825]: I0122 15:29:20.199085 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qg9m6" event={"ID":"60dda316-e11c-4286-866e-52fa6e3db5f9","Type":"ContainerStarted","Data":"e1b3fe091b3e78b1ab02382a675bf88cada51153f7b6fc510b1e4c4ad4688f42"}
Jan 22 15:29:20 crc kubenswrapper[4825]: I0122 15:29:20.268387 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kf2vh" podStartSLOduration=2.802800079 podStartE2EDuration="5.268370024s" podCreationTimestamp="2026-01-22 15:29:15 +0000 UTC" firstStartedPulling="2026-01-22 15:29:17.156855536 +0000 UTC m=+303.918382446" lastFinishedPulling="2026-01-22 15:29:19.622425471 +0000 UTC m=+306.383952391" observedRunningTime="2026-01-22 15:29:20.26752894 +0000 UTC m=+307.029055860" watchObservedRunningTime="2026-01-22 15:29:20.268370024 +0000 UTC m=+307.029896934"
Jan 22 15:29:21 crc kubenswrapper[4825]: I0122 15:29:21.205625 4825 generic.go:334] "Generic (PLEG): container finished" podID="30bcd0c8-9381-4b99-a083-b014af82df43" containerID="82654f14afa607a34260eea56f9b6d1c824052273f7a5c4756ebd268ab0fafc7" exitCode=0
Jan 22 15:29:21 crc kubenswrapper[4825]: I0122 15:29:21.205786 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7f7x" event={"ID":"30bcd0c8-9381-4b99-a083-b014af82df43","Type":"ContainerDied","Data":"82654f14afa607a34260eea56f9b6d1c824052273f7a5c4756ebd268ab0fafc7"}
Jan 22 15:29:21 crc kubenswrapper[4825]: I0122 15:29:21.208256 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87fzs" event={"ID":"d4d396c4-dbe4-4672-af11-a5db1019b169","Type":"ContainerStarted","Data":"c360f3be074989524756d1878fd5cc14498d9df4e99dff63e44787dd2e345fa5"}
Jan 22 15:29:21 crc kubenswrapper[4825]: I0122 15:29:21.258382 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-87fzs" podStartSLOduration=2.695794675 podStartE2EDuration="6.25836463s" podCreationTimestamp="2026-01-22 15:29:15 +0000 UTC" firstStartedPulling="2026-01-22 15:29:17.159713009 +0000 UTC m=+303.921239919" lastFinishedPulling="2026-01-22 15:29:20.722282964 +0000 UTC m=+307.483809874" observedRunningTime="2026-01-22 15:29:21.254895669 +0000 UTC m=+308.016422579" watchObservedRunningTime="2026-01-22 15:29:21.25836463 +0000 UTC m=+308.019891540"
Jan 22 15:29:22 crc kubenswrapper[4825]: I0122 15:29:22.215840 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7f7x" event={"ID":"30bcd0c8-9381-4b99-a083-b014af82df43","Type":"ContainerStarted","Data":"781e31408478f2a7d0dd2ff56702b56c7baff67bdd0f231421c7f95f8d97fa68"}
Jan 22 15:29:22 crc kubenswrapper[4825]: I0122 15:29:22.241388 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l7f7x" podStartSLOduration=2.808883454 podStartE2EDuration="5.241351912s" podCreationTimestamp="2026-01-22 15:29:17 +0000 UTC" firstStartedPulling="2026-01-22 15:29:19.193250917 +0000 UTC m=+305.954777827" lastFinishedPulling="2026-01-22 15:29:21.625719375 +0000 UTC m=+308.387246285" observedRunningTime="2026-01-22 15:29:22.238551611 +0000 UTC m=+309.000078521" watchObservedRunningTime="2026-01-22 15:29:22.241351912 +0000 UTC m=+309.002878822"
Jan 22 15:29:23 crc kubenswrapper[4825]: I0122 15:29:23.222252 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qg9m6" event={"ID":"60dda316-e11c-4286-866e-52fa6e3db5f9","Type":"ContainerStarted","Data":"af2b2c871764cde7ab507a09d9af3d10e9e9899751ff5f9ea76b6f60dfb1e118"}
Jan 22 15:29:25 crc kubenswrapper[4825]: I0122 15:29:25.246954 4825 generic.go:334] "Generic (PLEG): container finished" podID="60dda316-e11c-4286-866e-52fa6e3db5f9" containerID="af2b2c871764cde7ab507a09d9af3d10e9e9899751ff5f9ea76b6f60dfb1e118" exitCode=0
Jan 22 15:29:25 crc kubenswrapper[4825]: I0122 15:29:25.247366 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qg9m6" event={"ID":"60dda316-e11c-4286-866e-52fa6e3db5f9","Type":"ContainerDied","Data":"af2b2c871764cde7ab507a09d9af3d10e9e9899751ff5f9ea76b6f60dfb1e118"}
Jan 22 15:29:26 crc kubenswrapper[4825]: I0122 15:29:26.098803 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-87fzs"
Jan 22 15:29:26 crc kubenswrapper[4825]: I0122 15:29:26.099147 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-87fzs"
Jan 22 15:29:26 crc kubenswrapper[4825]: I0122 15:29:26.167330 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-87fzs"
Jan 22 15:29:26 crc kubenswrapper[4825]: I0122 15:29:26.193559 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kf2vh"
Jan 22 15:29:26 crc kubenswrapper[4825]: I0122 15:29:26.193619 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kf2vh"
Jan 22 15:29:26 crc kubenswrapper[4825]: I0122 15:29:26.228466 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kf2vh"
Jan 22 15:29:26 crc kubenswrapper[4825]: I0122 15:29:26.255574 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qg9m6" event={"ID":"60dda316-e11c-4286-866e-52fa6e3db5f9","Type":"ContainerStarted","Data":"f7090a767008b28005b3c2e94230279267524b116ca03ab8da4a2fc7b547ef99"}
Jan 22 15:29:26 crc kubenswrapper[4825]: I0122 15:29:26.280037 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qg9m6" podStartSLOduration=2.630783686 podStartE2EDuration="8.280018566s" podCreationTimestamp="2026-01-22 15:29:18 +0000 UTC" firstStartedPulling="2026-01-22 15:29:20.201211692 +0000 UTC m=+306.962738602" lastFinishedPulling="2026-01-22 15:29:25.850446562 +0000 UTC m=+312.611973482" observedRunningTime="2026-01-22 15:29:26.278945825 +0000 UTC m=+313.040472755" watchObservedRunningTime="2026-01-22 15:29:26.280018566 +0000 UTC m=+313.041545476"
Jan 22 15:29:26 crc kubenswrapper[4825]: I0122 15:29:26.293108 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kf2vh"
Jan 22 15:29:26 crc kubenswrapper[4825]: I0122 15:29:26.303029 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-87fzs"
Jan 22 15:29:27 crc kubenswrapper[4825]: I0122 15:29:27.719233 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l7f7x"
Jan 22 15:29:28 crc kubenswrapper[4825]: I0122 15:29:28.004269 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l7f7x"
Jan 22 15:29:28 crc kubenswrapper[4825]: I0122 15:29:28.083073 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l7f7x"
Jan 22 15:29:28 crc kubenswrapper[4825]: I0122 15:29:28.311172 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l7f7x"
Jan 22 15:29:28 crc kubenswrapper[4825]: I0122 15:29:28.589812 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qg9m6"
Jan 22 15:29:28 crc kubenswrapper[4825]: I0122 15:29:28.589861 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qg9m6"
Jan 22 15:29:29 crc kubenswrapper[4825]: I0122 15:29:29.623549 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qg9m6" podUID="60dda316-e11c-4286-866e-52fa6e3db5f9" containerName="registry-server" probeResult="failure" output=<
Jan 22 15:29:29 crc kubenswrapper[4825]: 	timeout: failed to connect service ":50051" within 1s
Jan 22 15:29:29 crc kubenswrapper[4825]:  >
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.431288 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qh4sm"]
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.433041 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qh4sm"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.444007 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qh4sm"]
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.594416 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8613a8ac-d68f-4ce7-b17b-ab85266760b3-utilities\") pod \"redhat-marketplace-qh4sm\" (UID: \"8613a8ac-d68f-4ce7-b17b-ab85266760b3\") " pod="openshift-marketplace/redhat-marketplace-qh4sm"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.594693 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8613a8ac-d68f-4ce7-b17b-ab85266760b3-catalog-content\") pod \"redhat-marketplace-qh4sm\" (UID: \"8613a8ac-d68f-4ce7-b17b-ab85266760b3\") " pod="openshift-marketplace/redhat-marketplace-qh4sm"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.594852 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8xr8\" (UniqueName: \"kubernetes.io/projected/8613a8ac-d68f-4ce7-b17b-ab85266760b3-kube-api-access-r8xr8\") pod \"redhat-marketplace-qh4sm\" (UID: \"8613a8ac-d68f-4ce7-b17b-ab85266760b3\") " pod="openshift-marketplace/redhat-marketplace-qh4sm"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.631145 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cfn59"]
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.632386 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cfn59"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.641107 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cfn59"]
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.696719 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8613a8ac-d68f-4ce7-b17b-ab85266760b3-catalog-content\") pod \"redhat-marketplace-qh4sm\" (UID: \"8613a8ac-d68f-4ce7-b17b-ab85266760b3\") " pod="openshift-marketplace/redhat-marketplace-qh4sm"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.697114 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8xr8\" (UniqueName: \"kubernetes.io/projected/8613a8ac-d68f-4ce7-b17b-ab85266760b3-kube-api-access-r8xr8\") pod \"redhat-marketplace-qh4sm\" (UID: \"8613a8ac-d68f-4ce7-b17b-ab85266760b3\") " pod="openshift-marketplace/redhat-marketplace-qh4sm"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.697271 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8613a8ac-d68f-4ce7-b17b-ab85266760b3-utilities\") pod \"redhat-marketplace-qh4sm\" (UID: \"8613a8ac-d68f-4ce7-b17b-ab85266760b3\") " pod="openshift-marketplace/redhat-marketplace-qh4sm"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.697779 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8613a8ac-d68f-4ce7-b17b-ab85266760b3-utilities\") pod \"redhat-marketplace-qh4sm\" (UID: \"8613a8ac-d68f-4ce7-b17b-ab85266760b3\") " pod="openshift-marketplace/redhat-marketplace-qh4sm"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.697292 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8613a8ac-d68f-4ce7-b17b-ab85266760b3-catalog-content\") pod \"redhat-marketplace-qh4sm\" (UID: \"8613a8ac-d68f-4ce7-b17b-ab85266760b3\") " pod="openshift-marketplace/redhat-marketplace-qh4sm"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.717847 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8xr8\" (UniqueName: \"kubernetes.io/projected/8613a8ac-d68f-4ce7-b17b-ab85266760b3-kube-api-access-r8xr8\") pod \"redhat-marketplace-qh4sm\" (UID: \"8613a8ac-d68f-4ce7-b17b-ab85266760b3\") " pod="openshift-marketplace/redhat-marketplace-qh4sm"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.749786 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qh4sm"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.798390 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6f1ab07-8476-48f7-8969-fd7bdba2fa71-catalog-content\") pod \"community-operators-cfn59\" (UID: \"c6f1ab07-8476-48f7-8969-fd7bdba2fa71\") " pod="openshift-marketplace/community-operators-cfn59"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.798442 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m88td\" (UniqueName: \"kubernetes.io/projected/c6f1ab07-8476-48f7-8969-fd7bdba2fa71-kube-api-access-m88td\") pod \"community-operators-cfn59\" (UID: \"c6f1ab07-8476-48f7-8969-fd7bdba2fa71\") " pod="openshift-marketplace/community-operators-cfn59"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.798471 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6f1ab07-8476-48f7-8969-fd7bdba2fa71-utilities\") pod \"community-operators-cfn59\" (UID: \"c6f1ab07-8476-48f7-8969-fd7bdba2fa71\") " pod="openshift-marketplace/community-operators-cfn59"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.900545 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6f1ab07-8476-48f7-8969-fd7bdba2fa71-utilities\") pod \"community-operators-cfn59\" (UID: \"c6f1ab07-8476-48f7-8969-fd7bdba2fa71\") " pod="openshift-marketplace/community-operators-cfn59"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.901007 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6f1ab07-8476-48f7-8969-fd7bdba2fa71-catalog-content\") pod \"community-operators-cfn59\" (UID: \"c6f1ab07-8476-48f7-8969-fd7bdba2fa71\") " pod="openshift-marketplace/community-operators-cfn59"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.901043 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m88td\" (UniqueName: \"kubernetes.io/projected/c6f1ab07-8476-48f7-8969-fd7bdba2fa71-kube-api-access-m88td\") pod \"community-operators-cfn59\" (UID: \"c6f1ab07-8476-48f7-8969-fd7bdba2fa71\") " pod="openshift-marketplace/community-operators-cfn59"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.901253 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6f1ab07-8476-48f7-8969-fd7bdba2fa71-utilities\") pod \"community-operators-cfn59\" (UID: \"c6f1ab07-8476-48f7-8969-fd7bdba2fa71\") " pod="openshift-marketplace/community-operators-cfn59"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.901415 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6f1ab07-8476-48f7-8969-fd7bdba2fa71-catalog-content\") pod \"community-operators-cfn59\" (UID: \"c6f1ab07-8476-48f7-8969-fd7bdba2fa71\") " pod="openshift-marketplace/community-operators-cfn59"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.923341 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m88td\" (UniqueName: \"kubernetes.io/projected/c6f1ab07-8476-48f7-8969-fd7bdba2fa71-kube-api-access-m88td\") pod \"community-operators-cfn59\" (UID: \"c6f1ab07-8476-48f7-8969-fd7bdba2fa71\") " pod="openshift-marketplace/community-operators-cfn59"
Jan 22 15:29:35 crc kubenswrapper[4825]: I0122 15:29:35.946507 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cfn59"
Jan 22 15:29:36 crc kubenswrapper[4825]: I0122 15:29:36.062315 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74577df4c5-5jf52"]
Jan 22 15:29:36 crc kubenswrapper[4825]: I0122 15:29:36.062600 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52" podUID="a5d2b38b-9d82-4261-9805-21c3d9cc7c80" containerName="controller-manager" containerID="cri-o://61459c278b9cf40e77307d11b60f66973b3df9ec7f6732146b116c0457647e37" gracePeriod=30
Jan 22 15:29:36 crc kubenswrapper[4825]: I0122 15:29:36.176326 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qh4sm"]
Jan 22 15:29:36 crc kubenswrapper[4825]: W0122 15:29:36.184689 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8613a8ac_d68f_4ce7_b17b_ab85266760b3.slice/crio-67c99ed207409aaf2236bed6dfc9032fef229b3b690f3ed04ffd798c3258728c WatchSource:0}: Error finding container 67c99ed207409aaf2236bed6dfc9032fef229b3b690f3ed04ffd798c3258728c: Status 404 returned error can't find the container with id 67c99ed207409aaf2236bed6dfc9032fef229b3b690f3ed04ffd798c3258728c
Jan 22 15:29:36 crc kubenswrapper[4825]: I0122 15:29:36.310572 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qh4sm" event={"ID":"8613a8ac-d68f-4ce7-b17b-ab85266760b3","Type":"ContainerStarted","Data":"67c99ed207409aaf2236bed6dfc9032fef229b3b690f3ed04ffd798c3258728c"}
Jan 22 15:29:36 crc kubenswrapper[4825]: I0122 15:29:36.385235 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cfn59"]
Jan 22 15:29:37 crc kubenswrapper[4825]: I0122 15:29:37.316604 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfn59" event={"ID":"c6f1ab07-8476-48f7-8969-fd7bdba2fa71","Type":"ContainerStarted","Data":"bf3a3e07204d540a0dae5efc65e126d7631d50d0d90daad601b9da4cd01c968d"}
Jan 22 15:29:37 crc kubenswrapper[4825]: I0122 15:29:37.831999 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j26fp"]
Jan 22 15:29:37 crc kubenswrapper[4825]: I0122 15:29:37.833290 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j26fp"
Jan 22 15:29:37 crc kubenswrapper[4825]: I0122 15:29:37.847650 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j26fp"]
Jan 22 15:29:37 crc kubenswrapper[4825]: I0122 15:29:37.925834 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b98246b0-1146-407d-99ba-0a8a93d3af50-utilities\") pod \"redhat-operators-j26fp\" (UID: \"b98246b0-1146-407d-99ba-0a8a93d3af50\") " pod="openshift-marketplace/redhat-operators-j26fp"
Jan 22 15:29:37 crc kubenswrapper[4825]: I0122 15:29:37.926042 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trtf8\" (UniqueName: \"kubernetes.io/projected/b98246b0-1146-407d-99ba-0a8a93d3af50-kube-api-access-trtf8\") pod \"redhat-operators-j26fp\" (UID: \"b98246b0-1146-407d-99ba-0a8a93d3af50\") " pod="openshift-marketplace/redhat-operators-j26fp"
Jan 22 15:29:37 crc kubenswrapper[4825]: I0122 15:29:37.926103 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b98246b0-1146-407d-99ba-0a8a93d3af50-catalog-content\") pod \"redhat-operators-j26fp\" (UID: \"b98246b0-1146-407d-99ba-0a8a93d3af50\") " pod="openshift-marketplace/redhat-operators-j26fp"
Jan 22 15:29:38 crc
kubenswrapper[4825]: I0122 15:29:38.007704 4825 patch_prober.go:28] interesting pod/controller-manager-74577df4c5-5jf52 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.007776 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52" podUID="a5d2b38b-9d82-4261-9805-21c3d9cc7c80" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.027348 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trtf8\" (UniqueName: \"kubernetes.io/projected/b98246b0-1146-407d-99ba-0a8a93d3af50-kube-api-access-trtf8\") pod \"redhat-operators-j26fp\" (UID: \"b98246b0-1146-407d-99ba-0a8a93d3af50\") " pod="openshift-marketplace/redhat-operators-j26fp" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.027407 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b98246b0-1146-407d-99ba-0a8a93d3af50-catalog-content\") pod \"redhat-operators-j26fp\" (UID: \"b98246b0-1146-407d-99ba-0a8a93d3af50\") " pod="openshift-marketplace/redhat-operators-j26fp" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.027513 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b98246b0-1146-407d-99ba-0a8a93d3af50-utilities\") pod \"redhat-operators-j26fp\" (UID: \"b98246b0-1146-407d-99ba-0a8a93d3af50\") " pod="openshift-marketplace/redhat-operators-j26fp" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.028314 4825 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b98246b0-1146-407d-99ba-0a8a93d3af50-utilities\") pod \"redhat-operators-j26fp\" (UID: \"b98246b0-1146-407d-99ba-0a8a93d3af50\") " pod="openshift-marketplace/redhat-operators-j26fp" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.028699 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b98246b0-1146-407d-99ba-0a8a93d3af50-catalog-content\") pod \"redhat-operators-j26fp\" (UID: \"b98246b0-1146-407d-99ba-0a8a93d3af50\") " pod="openshift-marketplace/redhat-operators-j26fp" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.045632 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c4mc8"] Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.047941 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4mc8" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.058879 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c4mc8"] Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.060018 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trtf8\" (UniqueName: \"kubernetes.io/projected/b98246b0-1146-407d-99ba-0a8a93d3af50-kube-api-access-trtf8\") pod \"redhat-operators-j26fp\" (UID: \"b98246b0-1146-407d-99ba-0a8a93d3af50\") " pod="openshift-marketplace/redhat-operators-j26fp" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.128331 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b091b06-afc3-4d38-9ad4-16003718f00e-catalog-content\") pod \"certified-operators-c4mc8\" (UID: \"3b091b06-afc3-4d38-9ad4-16003718f00e\") " 
pod="openshift-marketplace/certified-operators-c4mc8" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.128386 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b091b06-afc3-4d38-9ad4-16003718f00e-utilities\") pod \"certified-operators-c4mc8\" (UID: \"3b091b06-afc3-4d38-9ad4-16003718f00e\") " pod="openshift-marketplace/certified-operators-c4mc8" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.128427 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv4mx\" (UniqueName: \"kubernetes.io/projected/3b091b06-afc3-4d38-9ad4-16003718f00e-kube-api-access-xv4mx\") pod \"certified-operators-c4mc8\" (UID: \"3b091b06-afc3-4d38-9ad4-16003718f00e\") " pod="openshift-marketplace/certified-operators-c4mc8" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.229744 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b091b06-afc3-4d38-9ad4-16003718f00e-utilities\") pod \"certified-operators-c4mc8\" (UID: \"3b091b06-afc3-4d38-9ad4-16003718f00e\") " pod="openshift-marketplace/certified-operators-c4mc8" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.229831 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv4mx\" (UniqueName: \"kubernetes.io/projected/3b091b06-afc3-4d38-9ad4-16003718f00e-kube-api-access-xv4mx\") pod \"certified-operators-c4mc8\" (UID: \"3b091b06-afc3-4d38-9ad4-16003718f00e\") " pod="openshift-marketplace/certified-operators-c4mc8" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.229872 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b091b06-afc3-4d38-9ad4-16003718f00e-catalog-content\") pod \"certified-operators-c4mc8\" (UID: 
\"3b091b06-afc3-4d38-9ad4-16003718f00e\") " pod="openshift-marketplace/certified-operators-c4mc8" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.230248 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b091b06-afc3-4d38-9ad4-16003718f00e-utilities\") pod \"certified-operators-c4mc8\" (UID: \"3b091b06-afc3-4d38-9ad4-16003718f00e\") " pod="openshift-marketplace/certified-operators-c4mc8" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.230425 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b091b06-afc3-4d38-9ad4-16003718f00e-catalog-content\") pod \"certified-operators-c4mc8\" (UID: \"3b091b06-afc3-4d38-9ad4-16003718f00e\") " pod="openshift-marketplace/certified-operators-c4mc8" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.242259 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j26fp" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.246069 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv4mx\" (UniqueName: \"kubernetes.io/projected/3b091b06-afc3-4d38-9ad4-16003718f00e-kube-api-access-xv4mx\") pod \"certified-operators-c4mc8\" (UID: \"3b091b06-afc3-4d38-9ad4-16003718f00e\") " pod="openshift-marketplace/certified-operators-c4mc8" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.327305 4825 generic.go:334] "Generic (PLEG): container finished" podID="a5d2b38b-9d82-4261-9805-21c3d9cc7c80" containerID="61459c278b9cf40e77307d11b60f66973b3df9ec7f6732146b116c0457647e37" exitCode=0 Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.327473 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.327507 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52" event={"ID":"a5d2b38b-9d82-4261-9805-21c3d9cc7c80","Type":"ContainerDied","Data":"61459c278b9cf40e77307d11b60f66973b3df9ec7f6732146b116c0457647e37"} Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.327652 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52" event={"ID":"a5d2b38b-9d82-4261-9805-21c3d9cc7c80","Type":"ContainerDied","Data":"a7ae2bc1562bf17c6ea8463504204acca638e18282c56f18de070ae5c91dcf86"} Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.327673 4825 scope.go:117] "RemoveContainer" containerID="61459c278b9cf40e77307d11b60f66973b3df9ec7f6732146b116c0457647e37" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.334359 4825 generic.go:334] "Generic (PLEG): container finished" podID="c6f1ab07-8476-48f7-8969-fd7bdba2fa71" containerID="d04b3f54e45cf885b589b47186302b49eca56d09511ed2d3851427213cc4b878" exitCode=0 Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.334410 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfn59" event={"ID":"c6f1ab07-8476-48f7-8969-fd7bdba2fa71","Type":"ContainerDied","Data":"d04b3f54e45cf885b589b47186302b49eca56d09511ed2d3851427213cc4b878"} Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.350302 4825 generic.go:334] "Generic (PLEG): container finished" podID="8613a8ac-d68f-4ce7-b17b-ab85266760b3" containerID="95a00e4ff8973f04b2fc0b98bb6062cdb2a13d0548e11e4a4c82dc36f23dc9e7" exitCode=0 Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.350363 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qh4sm" 
event={"ID":"8613a8ac-d68f-4ce7-b17b-ab85266760b3","Type":"ContainerDied","Data":"95a00e4ff8973f04b2fc0b98bb6062cdb2a13d0548e11e4a4c82dc36f23dc9e7"} Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.375194 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b4cc7bd68-nghl6"] Jan 22 15:29:38 crc kubenswrapper[4825]: E0122 15:29:38.375460 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d2b38b-9d82-4261-9805-21c3d9cc7c80" containerName="controller-manager" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.375480 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d2b38b-9d82-4261-9805-21c3d9cc7c80" containerName="controller-manager" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.375616 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5d2b38b-9d82-4261-9805-21c3d9cc7c80" containerName="controller-manager" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.376075 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.385526 4825 scope.go:117] "RemoveContainer" containerID="61459c278b9cf40e77307d11b60f66973b3df9ec7f6732146b116c0457647e37" Jan 22 15:29:38 crc kubenswrapper[4825]: E0122 15:29:38.386039 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61459c278b9cf40e77307d11b60f66973b3df9ec7f6732146b116c0457647e37\": container with ID starting with 61459c278b9cf40e77307d11b60f66973b3df9ec7f6732146b116c0457647e37 not found: ID does not exist" containerID="61459c278b9cf40e77307d11b60f66973b3df9ec7f6732146b116c0457647e37" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.386068 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61459c278b9cf40e77307d11b60f66973b3df9ec7f6732146b116c0457647e37"} err="failed to get container status \"61459c278b9cf40e77307d11b60f66973b3df9ec7f6732146b116c0457647e37\": rpc error: code = NotFound desc = could not find container \"61459c278b9cf40e77307d11b60f66973b3df9ec7f6732146b116c0457647e37\": container with ID starting with 61459c278b9cf40e77307d11b60f66973b3df9ec7f6732146b116c0457647e37 not found: ID does not exist" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.388014 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b4cc7bd68-nghl6"] Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.390465 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c4mc8" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.437548 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-client-ca\") pod \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.437652 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd4vd\" (UniqueName: \"kubernetes.io/projected/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-kube-api-access-bd4vd\") pod \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.437713 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-proxy-ca-bundles\") pod \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.437744 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-config\") pod \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.437768 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-serving-cert\") pod \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\" (UID: \"a5d2b38b-9d82-4261-9805-21c3d9cc7c80\") " Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.438065 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4be91409-6831-4a95-b402-888644988272-proxy-ca-bundles\") pod \"controller-manager-b4cc7bd68-nghl6\" (UID: \"4be91409-6831-4a95-b402-888644988272\") " pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.438684 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a5d2b38b-9d82-4261-9805-21c3d9cc7c80" (UID: "a5d2b38b-9d82-4261-9805-21c3d9cc7c80"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.438704 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-client-ca" (OuterVolumeSpecName: "client-ca") pod "a5d2b38b-9d82-4261-9805-21c3d9cc7c80" (UID: "a5d2b38b-9d82-4261-9805-21c3d9cc7c80"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.438867 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-config" (OuterVolumeSpecName: "config") pod "a5d2b38b-9d82-4261-9805-21c3d9cc7c80" (UID: "a5d2b38b-9d82-4261-9805-21c3d9cc7c80"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.438911 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4be91409-6831-4a95-b402-888644988272-client-ca\") pod \"controller-manager-b4cc7bd68-nghl6\" (UID: \"4be91409-6831-4a95-b402-888644988272\") " pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.439026 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmqpf\" (UniqueName: \"kubernetes.io/projected/4be91409-6831-4a95-b402-888644988272-kube-api-access-jmqpf\") pod \"controller-manager-b4cc7bd68-nghl6\" (UID: \"4be91409-6831-4a95-b402-888644988272\") " pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.439110 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be91409-6831-4a95-b402-888644988272-serving-cert\") pod \"controller-manager-b4cc7bd68-nghl6\" (UID: \"4be91409-6831-4a95-b402-888644988272\") " pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.439270 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4be91409-6831-4a95-b402-888644988272-config\") pod \"controller-manager-b4cc7bd68-nghl6\" (UID: \"4be91409-6831-4a95-b402-888644988272\") " pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.439381 4825 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.439395 4825 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.439406 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.443133 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a5d2b38b-9d82-4261-9805-21c3d9cc7c80" (UID: "a5d2b38b-9d82-4261-9805-21c3d9cc7c80"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.443172 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-kube-api-access-bd4vd" (OuterVolumeSpecName: "kube-api-access-bd4vd") pod "a5d2b38b-9d82-4261-9805-21c3d9cc7c80" (UID: "a5d2b38b-9d82-4261-9805-21c3d9cc7c80"). InnerVolumeSpecName "kube-api-access-bd4vd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.476970 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j26fp"] Jan 22 15:29:38 crc kubenswrapper[4825]: W0122 15:29:38.496960 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb98246b0_1146_407d_99ba_0a8a93d3af50.slice/crio-5b66b29d99d65d7155b1561ca1c2f4653c3f5123f61724a74235793ef55e8d74 WatchSource:0}: Error finding container 5b66b29d99d65d7155b1561ca1c2f4653c3f5123f61724a74235793ef55e8d74: Status 404 returned error can't find the container with id 5b66b29d99d65d7155b1561ca1c2f4653c3f5123f61724a74235793ef55e8d74 Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.540866 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4be91409-6831-4a95-b402-888644988272-proxy-ca-bundles\") pod \"controller-manager-b4cc7bd68-nghl6\" (UID: \"4be91409-6831-4a95-b402-888644988272\") " pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.540995 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4be91409-6831-4a95-b402-888644988272-client-ca\") pod \"controller-manager-b4cc7bd68-nghl6\" (UID: \"4be91409-6831-4a95-b402-888644988272\") " pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.541029 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmqpf\" (UniqueName: \"kubernetes.io/projected/4be91409-6831-4a95-b402-888644988272-kube-api-access-jmqpf\") pod \"controller-manager-b4cc7bd68-nghl6\" (UID: \"4be91409-6831-4a95-b402-888644988272\") " 
pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.541056 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be91409-6831-4a95-b402-888644988272-serving-cert\") pod \"controller-manager-b4cc7bd68-nghl6\" (UID: \"4be91409-6831-4a95-b402-888644988272\") " pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.541130 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4be91409-6831-4a95-b402-888644988272-config\") pod \"controller-manager-b4cc7bd68-nghl6\" (UID: \"4be91409-6831-4a95-b402-888644988272\") " pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.541176 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd4vd\" (UniqueName: \"kubernetes.io/projected/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-kube-api-access-bd4vd\") on node \"crc\" DevicePath \"\"" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.541187 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5d2b38b-9d82-4261-9805-21c3d9cc7c80-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.542873 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4be91409-6831-4a95-b402-888644988272-config\") pod \"controller-manager-b4cc7bd68-nghl6\" (UID: \"4be91409-6831-4a95-b402-888644988272\") " pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.545677 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/4be91409-6831-4a95-b402-888644988272-client-ca\") pod \"controller-manager-b4cc7bd68-nghl6\" (UID: \"4be91409-6831-4a95-b402-888644988272\") " pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.549331 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4be91409-6831-4a95-b402-888644988272-proxy-ca-bundles\") pod \"controller-manager-b4cc7bd68-nghl6\" (UID: \"4be91409-6831-4a95-b402-888644988272\") " pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.552046 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be91409-6831-4a95-b402-888644988272-serving-cert\") pod \"controller-manager-b4cc7bd68-nghl6\" (UID: \"4be91409-6831-4a95-b402-888644988272\") " pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.583445 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmqpf\" (UniqueName: \"kubernetes.io/projected/4be91409-6831-4a95-b402-888644988272-kube-api-access-jmqpf\") pod \"controller-manager-b4cc7bd68-nghl6\" (UID: \"4be91409-6831-4a95-b402-888644988272\") " pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.687626 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qg9m6" Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.691425 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c4mc8"] Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.697689 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6"
Jan 22 15:29:38 crc kubenswrapper[4825]: W0122 15:29:38.728090 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b091b06_afc3_4d38_9ad4_16003718f00e.slice/crio-9e125f2ed404df41a89cfd031927a336c50015686ae05a56a281d60d38d3a37b WatchSource:0}: Error finding container 9e125f2ed404df41a89cfd031927a336c50015686ae05a56a281d60d38d3a37b: Status 404 returned error can't find the container with id 9e125f2ed404df41a89cfd031927a336c50015686ae05a56a281d60d38d3a37b
Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.751715 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qg9m6"
Jan 22 15:29:38 crc kubenswrapper[4825]: I0122 15:29:38.983617 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b4cc7bd68-nghl6"]
Jan 22 15:29:38 crc kubenswrapper[4825]: W0122 15:29:38.997070 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4be91409_6831_4a95_b402_888644988272.slice/crio-8070d3ef2e29efe724f739d59b75bd13a16dcb6e83edf2b49d37bd7572083c7d WatchSource:0}: Error finding container 8070d3ef2e29efe724f739d59b75bd13a16dcb6e83edf2b49d37bd7572083c7d: Status 404 returned error can't find the container with id 8070d3ef2e29efe724f739d59b75bd13a16dcb6e83edf2b49d37bd7572083c7d
Jan 22 15:29:39 crc kubenswrapper[4825]: E0122 15:29:39.020438 4825 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b091b06_afc3_4d38_9ad4_16003718f00e.slice/crio-conmon-4d063c7264e8b26830dd2d8725e62b10e9fca7c519c1578f2368c87cd83d4563.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b091b06_afc3_4d38_9ad4_16003718f00e.slice/crio-4d063c7264e8b26830dd2d8725e62b10e9fca7c519c1578f2368c87cd83d4563.scope\": RecentStats: unable to find data in memory cache]"
Jan 22 15:29:39 crc kubenswrapper[4825]: I0122 15:29:39.357880 4825 generic.go:334] "Generic (PLEG): container finished" podID="3b091b06-afc3-4d38-9ad4-16003718f00e" containerID="4d063c7264e8b26830dd2d8725e62b10e9fca7c519c1578f2368c87cd83d4563" exitCode=0
Jan 22 15:29:39 crc kubenswrapper[4825]: I0122 15:29:39.357948 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4mc8" event={"ID":"3b091b06-afc3-4d38-9ad4-16003718f00e","Type":"ContainerDied","Data":"4d063c7264e8b26830dd2d8725e62b10e9fca7c519c1578f2368c87cd83d4563"}
Jan 22 15:29:39 crc kubenswrapper[4825]: I0122 15:29:39.358278 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4mc8" event={"ID":"3b091b06-afc3-4d38-9ad4-16003718f00e","Type":"ContainerStarted","Data":"9e125f2ed404df41a89cfd031927a336c50015686ae05a56a281d60d38d3a37b"}
Jan 22 15:29:39 crc kubenswrapper[4825]: I0122 15:29:39.359909 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74577df4c5-5jf52"
Jan 22 15:29:39 crc kubenswrapper[4825]: I0122 15:29:39.361937 4825 generic.go:334] "Generic (PLEG): container finished" podID="b98246b0-1146-407d-99ba-0a8a93d3af50" containerID="f98ff7d56cd19cf5d14d483cfe4c36884ebacc939fae0f3587b3dd574f69101b" exitCode=0
Jan 22 15:29:39 crc kubenswrapper[4825]: I0122 15:29:39.362009 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j26fp" event={"ID":"b98246b0-1146-407d-99ba-0a8a93d3af50","Type":"ContainerDied","Data":"f98ff7d56cd19cf5d14d483cfe4c36884ebacc939fae0f3587b3dd574f69101b"}
Jan 22 15:29:39 crc kubenswrapper[4825]: I0122 15:29:39.362082 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j26fp" event={"ID":"b98246b0-1146-407d-99ba-0a8a93d3af50","Type":"ContainerStarted","Data":"5b66b29d99d65d7155b1561ca1c2f4653c3f5123f61724a74235793ef55e8d74"}
Jan 22 15:29:39 crc kubenswrapper[4825]: I0122 15:29:39.364529 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfn59" event={"ID":"c6f1ab07-8476-48f7-8969-fd7bdba2fa71","Type":"ContainerStarted","Data":"ab4c21355a9d8557e949e7e5c4465ee5462f642d39f8d4b5dd7e13064e354b99"}
Jan 22 15:29:39 crc kubenswrapper[4825]: I0122 15:29:39.371616 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qh4sm" event={"ID":"8613a8ac-d68f-4ce7-b17b-ab85266760b3","Type":"ContainerStarted","Data":"7c8468a98cd24663d55cbd8d49cc742d49a4737f897f8df2919e86786899fa82"}
Jan 22 15:29:39 crc kubenswrapper[4825]: I0122 15:29:39.373351 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6" event={"ID":"4be91409-6831-4a95-b402-888644988272","Type":"ContainerStarted","Data":"6651f96b6cb5c25637b71f72acfe4166f2fb4cde477230676bb4e4909809ac7e"}
Jan 22 15:29:39 crc kubenswrapper[4825]: I0122 15:29:39.373406 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6" event={"ID":"4be91409-6831-4a95-b402-888644988272","Type":"ContainerStarted","Data":"8070d3ef2e29efe724f739d59b75bd13a16dcb6e83edf2b49d37bd7572083c7d"}
Jan 22 15:29:39 crc kubenswrapper[4825]: I0122 15:29:39.373895 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6"
Jan 22 15:29:39 crc kubenswrapper[4825]: I0122 15:29:39.449967 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74577df4c5-5jf52"]
Jan 22 15:29:39 crc kubenswrapper[4825]: I0122 15:29:39.452735 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-74577df4c5-5jf52"]
Jan 22 15:29:39 crc kubenswrapper[4825]: I0122 15:29:39.463365 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6"
Jan 22 15:29:39 crc kubenswrapper[4825]: I0122 15:29:39.501204 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b4cc7bd68-nghl6" podStartSLOduration=3.501189368 podStartE2EDuration="3.501189368s" podCreationTimestamp="2026-01-22 15:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:29:39.498647304 +0000 UTC m=+326.260174214" watchObservedRunningTime="2026-01-22 15:29:39.501189368 +0000 UTC m=+326.262716268"
Jan 22 15:29:39 crc kubenswrapper[4825]: I0122 15:29:39.524763 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5d2b38b-9d82-4261-9805-21c3d9cc7c80" path="/var/lib/kubelet/pods/a5d2b38b-9d82-4261-9805-21c3d9cc7c80/volumes"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.230597 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r7qk5"]
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.232346 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7qk5"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.242483 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7qk5"]
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.304101 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnfjr\" (UniqueName: \"kubernetes.io/projected/e06cceab-9530-4e72-b66b-5d8086ea4c51-kube-api-access-nnfjr\") pod \"redhat-marketplace-r7qk5\" (UID: \"e06cceab-9530-4e72-b66b-5d8086ea4c51\") " pod="openshift-marketplace/redhat-marketplace-r7qk5"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.304153 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e06cceab-9530-4e72-b66b-5d8086ea4c51-catalog-content\") pod \"redhat-marketplace-r7qk5\" (UID: \"e06cceab-9530-4e72-b66b-5d8086ea4c51\") " pod="openshift-marketplace/redhat-marketplace-r7qk5"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.304187 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e06cceab-9530-4e72-b66b-5d8086ea4c51-utilities\") pod \"redhat-marketplace-r7qk5\" (UID: \"e06cceab-9530-4e72-b66b-5d8086ea4c51\") " pod="openshift-marketplace/redhat-marketplace-r7qk5"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.380737 4825 generic.go:334] "Generic (PLEG): container finished" podID="8613a8ac-d68f-4ce7-b17b-ab85266760b3" containerID="7c8468a98cd24663d55cbd8d49cc742d49a4737f897f8df2919e86786899fa82" exitCode=0
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.380822 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qh4sm" event={"ID":"8613a8ac-d68f-4ce7-b17b-ab85266760b3","Type":"ContainerDied","Data":"7c8468a98cd24663d55cbd8d49cc742d49a4737f897f8df2919e86786899fa82"}
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.382728 4825 generic.go:334] "Generic (PLEG): container finished" podID="c6f1ab07-8476-48f7-8969-fd7bdba2fa71" containerID="ab4c21355a9d8557e949e7e5c4465ee5462f642d39f8d4b5dd7e13064e354b99" exitCode=0
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.382776 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfn59" event={"ID":"c6f1ab07-8476-48f7-8969-fd7bdba2fa71","Type":"ContainerDied","Data":"ab4c21355a9d8557e949e7e5c4465ee5462f642d39f8d4b5dd7e13064e354b99"}
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.405733 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnfjr\" (UniqueName: \"kubernetes.io/projected/e06cceab-9530-4e72-b66b-5d8086ea4c51-kube-api-access-nnfjr\") pod \"redhat-marketplace-r7qk5\" (UID: \"e06cceab-9530-4e72-b66b-5d8086ea4c51\") " pod="openshift-marketplace/redhat-marketplace-r7qk5"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.405793 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e06cceab-9530-4e72-b66b-5d8086ea4c51-catalog-content\") pod \"redhat-marketplace-r7qk5\" (UID: \"e06cceab-9530-4e72-b66b-5d8086ea4c51\") " pod="openshift-marketplace/redhat-marketplace-r7qk5"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.405828 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e06cceab-9530-4e72-b66b-5d8086ea4c51-utilities\") pod \"redhat-marketplace-r7qk5\" (UID: \"e06cceab-9530-4e72-b66b-5d8086ea4c51\") " pod="openshift-marketplace/redhat-marketplace-r7qk5"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.406261 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e06cceab-9530-4e72-b66b-5d8086ea4c51-utilities\") pod \"redhat-marketplace-r7qk5\" (UID: \"e06cceab-9530-4e72-b66b-5d8086ea4c51\") " pod="openshift-marketplace/redhat-marketplace-r7qk5"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.406755 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e06cceab-9530-4e72-b66b-5d8086ea4c51-catalog-content\") pod \"redhat-marketplace-r7qk5\" (UID: \"e06cceab-9530-4e72-b66b-5d8086ea4c51\") " pod="openshift-marketplace/redhat-marketplace-r7qk5"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.426995 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnfjr\" (UniqueName: \"kubernetes.io/projected/e06cceab-9530-4e72-b66b-5d8086ea4c51-kube-api-access-nnfjr\") pod \"redhat-marketplace-r7qk5\" (UID: \"e06cceab-9530-4e72-b66b-5d8086ea4c51\") " pod="openshift-marketplace/redhat-marketplace-r7qk5"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.451551 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fkt56"]
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.453315 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fkt56"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.457481 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fkt56"]
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.506652 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff904fd-281e-4583-9b04-bd906890ec8d-catalog-content\") pod \"community-operators-fkt56\" (UID: \"1ff904fd-281e-4583-9b04-bd906890ec8d\") " pod="openshift-marketplace/community-operators-fkt56"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.506774 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff904fd-281e-4583-9b04-bd906890ec8d-utilities\") pod \"community-operators-fkt56\" (UID: \"1ff904fd-281e-4583-9b04-bd906890ec8d\") " pod="openshift-marketplace/community-operators-fkt56"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.506932 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87t6q\" (UniqueName: \"kubernetes.io/projected/1ff904fd-281e-4583-9b04-bd906890ec8d-kube-api-access-87t6q\") pod \"community-operators-fkt56\" (UID: \"1ff904fd-281e-4583-9b04-bd906890ec8d\") " pod="openshift-marketplace/community-operators-fkt56"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.555684 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7qk5"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.608355 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87t6q\" (UniqueName: \"kubernetes.io/projected/1ff904fd-281e-4583-9b04-bd906890ec8d-kube-api-access-87t6q\") pod \"community-operators-fkt56\" (UID: \"1ff904fd-281e-4583-9b04-bd906890ec8d\") " pod="openshift-marketplace/community-operators-fkt56"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.608423 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff904fd-281e-4583-9b04-bd906890ec8d-catalog-content\") pod \"community-operators-fkt56\" (UID: \"1ff904fd-281e-4583-9b04-bd906890ec8d\") " pod="openshift-marketplace/community-operators-fkt56"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.608502 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff904fd-281e-4583-9b04-bd906890ec8d-utilities\") pod \"community-operators-fkt56\" (UID: \"1ff904fd-281e-4583-9b04-bd906890ec8d\") " pod="openshift-marketplace/community-operators-fkt56"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.609073 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff904fd-281e-4583-9b04-bd906890ec8d-utilities\") pod \"community-operators-fkt56\" (UID: \"1ff904fd-281e-4583-9b04-bd906890ec8d\") " pod="openshift-marketplace/community-operators-fkt56"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.609458 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff904fd-281e-4583-9b04-bd906890ec8d-catalog-content\") pod \"community-operators-fkt56\" (UID: \"1ff904fd-281e-4583-9b04-bd906890ec8d\") " pod="openshift-marketplace/community-operators-fkt56"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.636331 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87t6q\" (UniqueName: \"kubernetes.io/projected/1ff904fd-281e-4583-9b04-bd906890ec8d-kube-api-access-87t6q\") pod \"community-operators-fkt56\" (UID: \"1ff904fd-281e-4583-9b04-bd906890ec8d\") " pod="openshift-marketplace/community-operators-fkt56"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.698871 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nzg25"]
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.699709 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.710217 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nzg25"]
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.776907 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fkt56"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.810750 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa77126e-20aa-40a3-a9c9-f28136f8dfac-trusted-ca\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.811012 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h5n8\" (UniqueName: \"kubernetes.io/projected/aa77126e-20aa-40a3-a9c9-f28136f8dfac-kube-api-access-4h5n8\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.811049 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aa77126e-20aa-40a3-a9c9-f28136f8dfac-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.811093 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aa77126e-20aa-40a3-a9c9-f28136f8dfac-registry-certificates\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.811112 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aa77126e-20aa-40a3-a9c9-f28136f8dfac-registry-tls\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.811134 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aa77126e-20aa-40a3-a9c9-f28136f8dfac-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.811162 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.811188 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa77126e-20aa-40a3-a9c9-f28136f8dfac-bound-sa-token\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.832199 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.912163 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h5n8\" (UniqueName: \"kubernetes.io/projected/aa77126e-20aa-40a3-a9c9-f28136f8dfac-kube-api-access-4h5n8\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.912227 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aa77126e-20aa-40a3-a9c9-f28136f8dfac-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.912264 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aa77126e-20aa-40a3-a9c9-f28136f8dfac-registry-certificates\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.912281 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aa77126e-20aa-40a3-a9c9-f28136f8dfac-registry-tls\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.912307 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aa77126e-20aa-40a3-a9c9-f28136f8dfac-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.912368 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa77126e-20aa-40a3-a9c9-f28136f8dfac-bound-sa-token\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.912578 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa77126e-20aa-40a3-a9c9-f28136f8dfac-trusted-ca\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.913704 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aa77126e-20aa-40a3-a9c9-f28136f8dfac-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.914458 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa77126e-20aa-40a3-a9c9-f28136f8dfac-trusted-ca\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.914489 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aa77126e-20aa-40a3-a9c9-f28136f8dfac-registry-certificates\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.971750 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aa77126e-20aa-40a3-a9c9-f28136f8dfac-registry-tls\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.976138 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aa77126e-20aa-40a3-a9c9-f28136f8dfac-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.997794 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa77126e-20aa-40a3-a9c9-f28136f8dfac-bound-sa-token\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:40 crc kubenswrapper[4825]: I0122 15:29:40.998333 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h5n8\" (UniqueName: \"kubernetes.io/projected/aa77126e-20aa-40a3-a9c9-f28136f8dfac-kube-api-access-4h5n8\") pod \"image-registry-66df7c8f76-nzg25\" (UID: \"aa77126e-20aa-40a3-a9c9-f28136f8dfac\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:41 crc kubenswrapper[4825]: I0122 15:29:41.015469 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:41 crc kubenswrapper[4825]: I0122 15:29:41.042598 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7qk5"]
Jan 22 15:29:41 crc kubenswrapper[4825]: W0122 15:29:41.047406 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode06cceab_9530_4e72_b66b_5d8086ea4c51.slice/crio-2506bd1fb878320a33c8540fba10d93e3a389acc5b8abffe3c7cd0688652dbe1 WatchSource:0}: Error finding container 2506bd1fb878320a33c8540fba10d93e3a389acc5b8abffe3c7cd0688652dbe1: Status 404 returned error can't find the container with id 2506bd1fb878320a33c8540fba10d93e3a389acc5b8abffe3c7cd0688652dbe1
Jan 22 15:29:41 crc kubenswrapper[4825]: I0122 15:29:41.156408 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fkt56"]
Jan 22 15:29:41 crc kubenswrapper[4825]: I0122 15:29:41.393684 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7qk5" event={"ID":"e06cceab-9530-4e72-b66b-5d8086ea4c51","Type":"ContainerStarted","Data":"2506bd1fb878320a33c8540fba10d93e3a389acc5b8abffe3c7cd0688652dbe1"}
Jan 22 15:29:41 crc kubenswrapper[4825]: I0122 15:29:41.395463 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkt56" event={"ID":"1ff904fd-281e-4583-9b04-bd906890ec8d","Type":"ContainerStarted","Data":"a68686d426b4717fce9ad5fa09c15d2bfff7f81f1e45764224c5f9b9db9bd79f"}
Jan 22 15:29:41 crc kubenswrapper[4825]: I0122 15:29:41.455287 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nzg25"]
Jan 22 15:29:41 crc kubenswrapper[4825]: W0122 15:29:41.464234 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa77126e_20aa_40a3_a9c9_f28136f8dfac.slice/crio-b8d4e71c5ac3ba70c11bd37ca75e85209e2ffcff2bb678c46d64c630792345f5 WatchSource:0}: Error finding container b8d4e71c5ac3ba70c11bd37ca75e85209e2ffcff2bb678c46d64c630792345f5: Status 404 returned error can't find the container with id b8d4e71c5ac3ba70c11bd37ca75e85209e2ffcff2bb678c46d64c630792345f5
Jan 22 15:29:42 crc kubenswrapper[4825]: I0122 15:29:42.401571 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nzg25" event={"ID":"aa77126e-20aa-40a3-a9c9-f28136f8dfac","Type":"ContainerStarted","Data":"b3ee881fbcf0ab24b1b3836a0185aa76236523badd890fd1f2a63de6cb817d8e"}
Jan 22 15:29:42 crc kubenswrapper[4825]: I0122 15:29:42.401766 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nzg25" event={"ID":"aa77126e-20aa-40a3-a9c9-f28136f8dfac","Type":"ContainerStarted","Data":"b8d4e71c5ac3ba70c11bd37ca75e85209e2ffcff2bb678c46d64c630792345f5"}
Jan 22 15:29:42 crc kubenswrapper[4825]: I0122 15:29:42.401787 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-nzg25"
Jan 22 15:29:42 crc kubenswrapper[4825]: I0122 15:29:42.403023 4825 generic.go:334] "Generic (PLEG): container finished" podID="e06cceab-9530-4e72-b66b-5d8086ea4c51" containerID="37d4fd40e4319ce53d7717e290cf0e301448ec08e3f8ebaffc22009889034bb0" exitCode=0
Jan 22 15:29:42 crc kubenswrapper[4825]: I0122 15:29:42.403113 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7qk5" event={"ID":"e06cceab-9530-4e72-b66b-5d8086ea4c51","Type":"ContainerDied","Data":"37d4fd40e4319ce53d7717e290cf0e301448ec08e3f8ebaffc22009889034bb0"}
Jan 22 15:29:42 crc kubenswrapper[4825]: I0122 15:29:42.404534 4825 generic.go:334] "Generic (PLEG): container finished" podID="1ff904fd-281e-4583-9b04-bd906890ec8d" containerID="9258be5a542efd7daf8a99a0f70e0a800790a12e1023dfdef344cd81981b17ef" exitCode=0
Jan 22 15:29:42 crc kubenswrapper[4825]: I0122 15:29:42.404565 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkt56" event={"ID":"1ff904fd-281e-4583-9b04-bd906890ec8d","Type":"ContainerDied","Data":"9258be5a542efd7daf8a99a0f70e0a800790a12e1023dfdef344cd81981b17ef"}
Jan 22 15:29:42 crc kubenswrapper[4825]: I0122 15:29:42.422326 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-nzg25" podStartSLOduration=2.422308437 podStartE2EDuration="2.422308437s" podCreationTimestamp="2026-01-22 15:29:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:29:42.421576886 +0000 UTC m=+329.183103796" watchObservedRunningTime="2026-01-22 15:29:42.422308437 +0000 UTC m=+329.183835347"
Jan 22 15:29:42 crc kubenswrapper[4825]: I0122 15:29:42.850774 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9dmgd"]
Jan 22 15:29:42 crc kubenswrapper[4825]: I0122 15:29:42.852784 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9dmgd"
Jan 22 15:29:42 crc kubenswrapper[4825]: I0122 15:29:42.866637 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9dmgd"]
Jan 22 15:29:42 crc kubenswrapper[4825]: I0122 15:29:42.951272 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c18532f0-448c-4b68-a9b5-184026c8742e-catalog-content\") pod \"redhat-operators-9dmgd\" (UID: \"c18532f0-448c-4b68-a9b5-184026c8742e\") " pod="openshift-marketplace/redhat-operators-9dmgd"
Jan 22 15:29:42 crc kubenswrapper[4825]: I0122 15:29:42.951750 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5kl5\" (UniqueName: \"kubernetes.io/projected/c18532f0-448c-4b68-a9b5-184026c8742e-kube-api-access-q5kl5\") pod \"redhat-operators-9dmgd\" (UID: \"c18532f0-448c-4b68-a9b5-184026c8742e\") " pod="openshift-marketplace/redhat-operators-9dmgd"
Jan 22 15:29:42 crc kubenswrapper[4825]: I0122 15:29:42.952003 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c18532f0-448c-4b68-a9b5-184026c8742e-utilities\") pod \"redhat-operators-9dmgd\" (UID: \"c18532f0-448c-4b68-a9b5-184026c8742e\") " pod="openshift-marketplace/redhat-operators-9dmgd"
Jan 22 15:29:43 crc kubenswrapper[4825]: I0122 15:29:43.053870 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c18532f0-448c-4b68-a9b5-184026c8742e-utilities\") pod \"redhat-operators-9dmgd\" (UID: \"c18532f0-448c-4b68-a9b5-184026c8742e\") " pod="openshift-marketplace/redhat-operators-9dmgd"
Jan 22 15:29:43 crc kubenswrapper[4825]: I0122 15:29:43.053947 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c18532f0-448c-4b68-a9b5-184026c8742e-catalog-content\") pod \"redhat-operators-9dmgd\" (UID: \"c18532f0-448c-4b68-a9b5-184026c8742e\") " pod="openshift-marketplace/redhat-operators-9dmgd"
Jan 22 15:29:43 crc kubenswrapper[4825]: I0122 15:29:43.054294 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c18532f0-448c-4b68-a9b5-184026c8742e-utilities\") pod \"redhat-operators-9dmgd\" (UID: \"c18532f0-448c-4b68-a9b5-184026c8742e\") " pod="openshift-marketplace/redhat-operators-9dmgd"
Jan 22 15:29:43 crc kubenswrapper[4825]: I0122 15:29:43.054343 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c18532f0-448c-4b68-a9b5-184026c8742e-catalog-content\") pod \"redhat-operators-9dmgd\" (UID: \"c18532f0-448c-4b68-a9b5-184026c8742e\") " pod="openshift-marketplace/redhat-operators-9dmgd"
Jan 22 15:29:43 crc kubenswrapper[4825]: I0122 15:29:43.054429 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5kl5\" (UniqueName: \"kubernetes.io/projected/c18532f0-448c-4b68-a9b5-184026c8742e-kube-api-access-q5kl5\") pod \"redhat-operators-9dmgd\" (UID: \"c18532f0-448c-4b68-a9b5-184026c8742e\") " pod="openshift-marketplace/redhat-operators-9dmgd"
Jan 22 15:29:43 crc kubenswrapper[4825]: I0122 15:29:43.072861 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5kl5\" (UniqueName: \"kubernetes.io/projected/c18532f0-448c-4b68-a9b5-184026c8742e-kube-api-access-q5kl5\") pod \"redhat-operators-9dmgd\" (UID: \"c18532f0-448c-4b68-a9b5-184026c8742e\") " pod="openshift-marketplace/redhat-operators-9dmgd"
Jan 22 15:29:43 crc kubenswrapper[4825]: I0122 15:29:43.167518 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9dmgd"
Jan 22 15:29:43 crc kubenswrapper[4825]: I0122 15:29:43.423642 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4mc8" event={"ID":"3b091b06-afc3-4d38-9ad4-16003718f00e","Type":"ContainerStarted","Data":"83dfdb8a7f5c205630462313c7fc1cd8b9d0215c347ddf8429409272899b9a40"}
Jan 22 15:29:43 crc kubenswrapper[4825]: I0122 15:29:43.426301 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j26fp" event={"ID":"b98246b0-1146-407d-99ba-0a8a93d3af50","Type":"ContainerStarted","Data":"369a6af03fa066c19d3de8c105ba7395991212f0f17b872610fc71e05b73994f"}
Jan 22 15:29:44 crc kubenswrapper[4825]: I0122 15:29:44.433049 4825 generic.go:334] "Generic (PLEG): container finished" podID="3b091b06-afc3-4d38-9ad4-16003718f00e" containerID="83dfdb8a7f5c205630462313c7fc1cd8b9d0215c347ddf8429409272899b9a40" exitCode=0
Jan 22 15:29:44 crc kubenswrapper[4825]: I0122 15:29:44.433339 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4mc8" event={"ID":"3b091b06-afc3-4d38-9ad4-16003718f00e","Type":"ContainerDied","Data":"83dfdb8a7f5c205630462313c7fc1cd8b9d0215c347ddf8429409272899b9a40"}
Jan 22 15:29:44 crc kubenswrapper[4825]: I0122 15:29:44.435245 4825 generic.go:334] "Generic (PLEG): container finished" podID="b98246b0-1146-407d-99ba-0a8a93d3af50" containerID="369a6af03fa066c19d3de8c105ba7395991212f0f17b872610fc71e05b73994f" exitCode=0
Jan 22 15:29:44 crc kubenswrapper[4825]: I0122 15:29:44.435275 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j26fp" event={"ID":"b98246b0-1146-407d-99ba-0a8a93d3af50","Type":"ContainerDied","Data":"369a6af03fa066c19d3de8c105ba7395991212f0f17b872610fc71e05b73994f"}
Jan 22 15:29:46 crc kubenswrapper[4825]: I0122 15:29:46.460844 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9dmgd"]
Jan 22 15:29:46 crc kubenswrapper[4825]: W0122 15:29:46.464302 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc18532f0_448c_4b68_a9b5_184026c8742e.slice/crio-c859d177c8ac1f6b97bc06dd2d6de8fb719e98ea3f800753fe7558ee8dc81633 WatchSource:0}: Error finding container c859d177c8ac1f6b97bc06dd2d6de8fb719e98ea3f800753fe7558ee8dc81633: Status 404 returned error can't find the container with id c859d177c8ac1f6b97bc06dd2d6de8fb719e98ea3f800753fe7558ee8dc81633
Jan 22 15:29:47 crc kubenswrapper[4825]: I0122 15:29:47.454970 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkt56" event={"ID":"1ff904fd-281e-4583-9b04-bd906890ec8d","Type":"ContainerStarted","Data":"4acad64d468c18bb014593ecb89ad8d7c139fcc73a32043b708998c99e3b8cdf"}
Jan 22 15:29:47 crc kubenswrapper[4825]: I0122 15:29:47.460380 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfn59" event={"ID":"c6f1ab07-8476-48f7-8969-fd7bdba2fa71","Type":"ContainerStarted","Data":"29cdeb87d086825a649cd50aea625633c6b5e8b3952a1023096aad2a88de7f76"}
Jan 22 15:29:47 crc kubenswrapper[4825]: I0122 15:29:47.462059 4825 generic.go:334] "Generic (PLEG): container finished" podID="c18532f0-448c-4b68-a9b5-184026c8742e" containerID="414187aa9e391ed699455cd52c3287b8b307622e428b4d890c81ebfacddf2cc1" exitCode=0
Jan 22 15:29:47 crc kubenswrapper[4825]: I0122 15:29:47.462137 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9dmgd" event={"ID":"c18532f0-448c-4b68-a9b5-184026c8742e","Type":"ContainerDied","Data":"414187aa9e391ed699455cd52c3287b8b307622e428b4d890c81ebfacddf2cc1"}
Jan 22 15:29:47 crc kubenswrapper[4825]: I0122 15:29:47.462163 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9dmgd"
event={"ID":"c18532f0-448c-4b68-a9b5-184026c8742e","Type":"ContainerStarted","Data":"c859d177c8ac1f6b97bc06dd2d6de8fb719e98ea3f800753fe7558ee8dc81633"} Jan 22 15:29:47 crc kubenswrapper[4825]: I0122 15:29:47.464380 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qh4sm" event={"ID":"8613a8ac-d68f-4ce7-b17b-ab85266760b3","Type":"ContainerStarted","Data":"d60fe09f0f04eb1289306f4666c8e4c71b0a155be0555f7b33dc8fd070732040"} Jan 22 15:29:47 crc kubenswrapper[4825]: I0122 15:29:47.521040 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qh4sm" podStartSLOduration=6.697072648 podStartE2EDuration="12.521024341s" podCreationTimestamp="2026-01-22 15:29:35 +0000 UTC" firstStartedPulling="2026-01-22 15:29:38.351757741 +0000 UTC m=+325.113284661" lastFinishedPulling="2026-01-22 15:29:44.175709444 +0000 UTC m=+330.937236354" observedRunningTime="2026-01-22 15:29:47.518497668 +0000 UTC m=+334.280024578" watchObservedRunningTime="2026-01-22 15:29:47.521024341 +0000 UTC m=+334.282551251" Jan 22 15:29:47 crc kubenswrapper[4825]: I0122 15:29:47.695006 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cfn59" podStartSLOduration=4.771122667 podStartE2EDuration="12.69499015s" podCreationTimestamp="2026-01-22 15:29:35 +0000 UTC" firstStartedPulling="2026-01-22 15:29:38.341371291 +0000 UTC m=+325.102898231" lastFinishedPulling="2026-01-22 15:29:46.265238804 +0000 UTC m=+333.026765714" observedRunningTime="2026-01-22 15:29:47.691217162 +0000 UTC m=+334.452744072" watchObservedRunningTime="2026-01-22 15:29:47.69499015 +0000 UTC m=+334.456517060" Jan 22 15:29:48 crc kubenswrapper[4825]: I0122 15:29:48.471598 4825 generic.go:334] "Generic (PLEG): container finished" podID="1ff904fd-281e-4583-9b04-bd906890ec8d" containerID="4acad64d468c18bb014593ecb89ad8d7c139fcc73a32043b708998c99e3b8cdf" exitCode=0 Jan 22 
15:29:48 crc kubenswrapper[4825]: I0122 15:29:48.471666 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkt56" event={"ID":"1ff904fd-281e-4583-9b04-bd906890ec8d","Type":"ContainerDied","Data":"4acad64d468c18bb014593ecb89ad8d7c139fcc73a32043b708998c99e3b8cdf"} Jan 22 15:29:48 crc kubenswrapper[4825]: I0122 15:29:48.473402 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4mc8" event={"ID":"3b091b06-afc3-4d38-9ad4-16003718f00e","Type":"ContainerStarted","Data":"22032d6cae11896c967f79aff90c28517e704b5c254785c11981dbae08d23629"} Jan 22 15:29:48 crc kubenswrapper[4825]: I0122 15:29:48.475568 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j26fp" event={"ID":"b98246b0-1146-407d-99ba-0a8a93d3af50","Type":"ContainerStarted","Data":"389c752e84219ee9f2d36af6f03719c62171830f5bd00db517543eac8ca1b29b"} Jan 22 15:29:48 crc kubenswrapper[4825]: I0122 15:29:48.477668 4825 generic.go:334] "Generic (PLEG): container finished" podID="e06cceab-9530-4e72-b66b-5d8086ea4c51" containerID="ea41bb1986a6b7c3eb970d3e28d0ff05dd32824cc178b9460bd130ca72ba907a" exitCode=0 Jan 22 15:29:48 crc kubenswrapper[4825]: I0122 15:29:48.477693 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7qk5" event={"ID":"e06cceab-9530-4e72-b66b-5d8086ea4c51","Type":"ContainerDied","Data":"ea41bb1986a6b7c3eb970d3e28d0ff05dd32824cc178b9460bd130ca72ba907a"} Jan 22 15:29:48 crc kubenswrapper[4825]: I0122 15:29:48.508744 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j26fp" podStartSLOduration=3.961192118 podStartE2EDuration="11.508728486s" podCreationTimestamp="2026-01-22 15:29:37 +0000 UTC" firstStartedPulling="2026-01-22 15:29:39.363383032 +0000 UTC m=+326.124909962" lastFinishedPulling="2026-01-22 15:29:46.91091942 +0000 UTC m=+333.672446330" 
observedRunningTime="2026-01-22 15:29:48.504638519 +0000 UTC m=+335.266165429" watchObservedRunningTime="2026-01-22 15:29:48.508728486 +0000 UTC m=+335.270255396" Jan 22 15:29:48 crc kubenswrapper[4825]: I0122 15:29:48.547277 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c4mc8" podStartSLOduration=2.972936567 podStartE2EDuration="10.547257156s" podCreationTimestamp="2026-01-22 15:29:38 +0000 UTC" firstStartedPulling="2026-01-22 15:29:39.359430398 +0000 UTC m=+326.120957318" lastFinishedPulling="2026-01-22 15:29:46.933750997 +0000 UTC m=+333.695277907" observedRunningTime="2026-01-22 15:29:48.541530481 +0000 UTC m=+335.303057401" watchObservedRunningTime="2026-01-22 15:29:48.547257156 +0000 UTC m=+335.308784076" Jan 22 15:29:49 crc kubenswrapper[4825]: I0122 15:29:49.487553 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9dmgd" event={"ID":"c18532f0-448c-4b68-a9b5-184026c8742e","Type":"ContainerStarted","Data":"bc25c16aea7512b8c0ece1fdd76b9ecca72da1e58e91a9b22c1eda8f189f715f"} Jan 22 15:29:50 crc kubenswrapper[4825]: I0122 15:29:50.494957 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7qk5" event={"ID":"e06cceab-9530-4e72-b66b-5d8086ea4c51","Type":"ContainerStarted","Data":"4c0e5cc8fa27d6badf265c293a1c5b093dbd5fbf1bf22d8062242826f099eb53"} Jan 22 15:29:50 crc kubenswrapper[4825]: I0122 15:29:50.497121 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkt56" event={"ID":"1ff904fd-281e-4583-9b04-bd906890ec8d","Type":"ContainerStarted","Data":"8684ac4241b2a570b23be9847a4c2877f16bdfee8b11f7f8d9d430a59b0e6725"} Jan 22 15:29:50 crc kubenswrapper[4825]: I0122 15:29:50.509512 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r7qk5" podStartSLOduration=3.461124635 
podStartE2EDuration="10.50949905s" podCreationTimestamp="2026-01-22 15:29:40 +0000 UTC" firstStartedPulling="2026-01-22 15:29:42.40402996 +0000 UTC m=+329.165556880" lastFinishedPulling="2026-01-22 15:29:49.452404385 +0000 UTC m=+336.213931295" observedRunningTime="2026-01-22 15:29:50.508592084 +0000 UTC m=+337.270118994" watchObservedRunningTime="2026-01-22 15:29:50.50949905 +0000 UTC m=+337.271025960" Jan 22 15:29:50 crc kubenswrapper[4825]: I0122 15:29:50.532799 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fkt56" podStartSLOduration=3.464184063 podStartE2EDuration="10.53278502s" podCreationTimestamp="2026-01-22 15:29:40 +0000 UTC" firstStartedPulling="2026-01-22 15:29:42.40681022 +0000 UTC m=+329.168337130" lastFinishedPulling="2026-01-22 15:29:49.475411167 +0000 UTC m=+336.236938087" observedRunningTime="2026-01-22 15:29:50.528517947 +0000 UTC m=+337.290044867" watchObservedRunningTime="2026-01-22 15:29:50.53278502 +0000 UTC m=+337.294311930" Jan 22 15:29:50 crc kubenswrapper[4825]: I0122 15:29:50.556678 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r7qk5" Jan 22 15:29:50 crc kubenswrapper[4825]: I0122 15:29:50.556726 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r7qk5" Jan 22 15:29:50 crc kubenswrapper[4825]: I0122 15:29:50.777702 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fkt56" Jan 22 15:29:50 crc kubenswrapper[4825]: I0122 15:29:50.777749 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fkt56" Jan 22 15:29:51 crc kubenswrapper[4825]: I0122 15:29:51.503174 4825 generic.go:334] "Generic (PLEG): container finished" podID="c18532f0-448c-4b68-a9b5-184026c8742e" 
containerID="bc25c16aea7512b8c0ece1fdd76b9ecca72da1e58e91a9b22c1eda8f189f715f" exitCode=0 Jan 22 15:29:51 crc kubenswrapper[4825]: I0122 15:29:51.503245 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9dmgd" event={"ID":"c18532f0-448c-4b68-a9b5-184026c8742e","Type":"ContainerDied","Data":"bc25c16aea7512b8c0ece1fdd76b9ecca72da1e58e91a9b22c1eda8f189f715f"} Jan 22 15:29:51 crc kubenswrapper[4825]: I0122 15:29:51.625876 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-r7qk5" podUID="e06cceab-9530-4e72-b66b-5d8086ea4c51" containerName="registry-server" probeResult="failure" output=< Jan 22 15:29:51 crc kubenswrapper[4825]: timeout: failed to connect service ":50051" within 1s Jan 22 15:29:51 crc kubenswrapper[4825]: > Jan 22 15:29:51 crc kubenswrapper[4825]: I0122 15:29:51.822312 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fkt56" podUID="1ff904fd-281e-4583-9b04-bd906890ec8d" containerName="registry-server" probeResult="failure" output=< Jan 22 15:29:51 crc kubenswrapper[4825]: timeout: failed to connect service ":50051" within 1s Jan 22 15:29:51 crc kubenswrapper[4825]: > Jan 22 15:29:53 crc kubenswrapper[4825]: I0122 15:29:53.525627 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9dmgd" event={"ID":"c18532f0-448c-4b68-a9b5-184026c8742e","Type":"ContainerStarted","Data":"35579c93d8bf0a224779f18fe69b5d167cc8490f4d8399338da8d8a7f3a7aff4"} Jan 22 15:29:53 crc kubenswrapper[4825]: I0122 15:29:53.545296 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9dmgd" podStartSLOduration=6.726457588 podStartE2EDuration="11.54527675s" podCreationTimestamp="2026-01-22 15:29:42 +0000 UTC" firstStartedPulling="2026-01-22 15:29:47.477620111 +0000 UTC m=+334.239147021" lastFinishedPulling="2026-01-22 
15:29:52.296439283 +0000 UTC m=+339.057966183" observedRunningTime="2026-01-22 15:29:53.54352988 +0000 UTC m=+340.305056790" watchObservedRunningTime="2026-01-22 15:29:53.54527675 +0000 UTC m=+340.306803660" Jan 22 15:29:55 crc kubenswrapper[4825]: I0122 15:29:55.751533 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qh4sm" Jan 22 15:29:55 crc kubenswrapper[4825]: I0122 15:29:55.752509 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qh4sm" Jan 22 15:29:55 crc kubenswrapper[4825]: I0122 15:29:55.824835 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qh4sm" Jan 22 15:29:55 crc kubenswrapper[4825]: I0122 15:29:55.947216 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cfn59" Jan 22 15:29:55 crc kubenswrapper[4825]: I0122 15:29:55.947387 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cfn59" Jan 22 15:29:55 crc kubenswrapper[4825]: I0122 15:29:55.986423 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cfn59" Jan 22 15:29:56 crc kubenswrapper[4825]: I0122 15:29:56.577613 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qh4sm" Jan 22 15:29:56 crc kubenswrapper[4825]: I0122 15:29:56.580072 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cfn59" Jan 22 15:29:58 crc kubenswrapper[4825]: I0122 15:29:58.242502 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j26fp" Jan 22 15:29:58 crc kubenswrapper[4825]: I0122 15:29:58.243143 4825 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j26fp" Jan 22 15:29:58 crc kubenswrapper[4825]: I0122 15:29:58.286841 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j26fp" Jan 22 15:29:58 crc kubenswrapper[4825]: I0122 15:29:58.392110 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c4mc8" Jan 22 15:29:58 crc kubenswrapper[4825]: I0122 15:29:58.392440 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c4mc8" Jan 22 15:29:58 crc kubenswrapper[4825]: I0122 15:29:58.444341 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c4mc8" Jan 22 15:29:58 crc kubenswrapper[4825]: I0122 15:29:58.626072 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c4mc8" Jan 22 15:29:58 crc kubenswrapper[4825]: I0122 15:29:58.626584 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j26fp" Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.177855 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n"] Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.178700 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n" Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.183306 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.183786 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.186520 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n"] Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.284312 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmrdv\" (UniqueName: \"kubernetes.io/projected/da621d4b-84ff-4f2b-a8bf-db16fa054f4e-kube-api-access-pmrdv\") pod \"collect-profiles-29484930-xdl2n\" (UID: \"da621d4b-84ff-4f2b-a8bf-db16fa054f4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n" Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.284381 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da621d4b-84ff-4f2b-a8bf-db16fa054f4e-config-volume\") pod \"collect-profiles-29484930-xdl2n\" (UID: \"da621d4b-84ff-4f2b-a8bf-db16fa054f4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n" Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.284419 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da621d4b-84ff-4f2b-a8bf-db16fa054f4e-secret-volume\") pod \"collect-profiles-29484930-xdl2n\" (UID: \"da621d4b-84ff-4f2b-a8bf-db16fa054f4e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n" Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.385035 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmrdv\" (UniqueName: \"kubernetes.io/projected/da621d4b-84ff-4f2b-a8bf-db16fa054f4e-kube-api-access-pmrdv\") pod \"collect-profiles-29484930-xdl2n\" (UID: \"da621d4b-84ff-4f2b-a8bf-db16fa054f4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n" Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.385308 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da621d4b-84ff-4f2b-a8bf-db16fa054f4e-config-volume\") pod \"collect-profiles-29484930-xdl2n\" (UID: \"da621d4b-84ff-4f2b-a8bf-db16fa054f4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n" Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.385424 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da621d4b-84ff-4f2b-a8bf-db16fa054f4e-secret-volume\") pod \"collect-profiles-29484930-xdl2n\" (UID: \"da621d4b-84ff-4f2b-a8bf-db16fa054f4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n" Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.386079 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da621d4b-84ff-4f2b-a8bf-db16fa054f4e-config-volume\") pod \"collect-profiles-29484930-xdl2n\" (UID: \"da621d4b-84ff-4f2b-a8bf-db16fa054f4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n" Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.391373 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/da621d4b-84ff-4f2b-a8bf-db16fa054f4e-secret-volume\") pod \"collect-profiles-29484930-xdl2n\" (UID: \"da621d4b-84ff-4f2b-a8bf-db16fa054f4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n" Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.400572 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmrdv\" (UniqueName: \"kubernetes.io/projected/da621d4b-84ff-4f2b-a8bf-db16fa054f4e-kube-api-access-pmrdv\") pod \"collect-profiles-29484930-xdl2n\" (UID: \"da621d4b-84ff-4f2b-a8bf-db16fa054f4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n" Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.497304 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n" Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.612767 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r7qk5" Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.659463 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r7qk5" Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.815173 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fkt56" Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.855567 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fkt56" Jan 22 15:30:00 crc kubenswrapper[4825]: I0122 15:30:00.917150 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n"] Jan 22 15:30:01 crc kubenswrapper[4825]: I0122 15:30:01.020729 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-image-registry/image-registry-66df7c8f76-nzg25" Jan 22 15:30:01 crc kubenswrapper[4825]: I0122 15:30:01.069229 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z9s9f"] Jan 22 15:30:01 crc kubenswrapper[4825]: I0122 15:30:01.572575 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n" event={"ID":"da621d4b-84ff-4f2b-a8bf-db16fa054f4e","Type":"ContainerStarted","Data":"18e05ff392b04c9405f7b7cd697ef980aee9fe3f5f0bda25a1bb830d87270ab7"} Jan 22 15:30:03 crc kubenswrapper[4825]: I0122 15:30:03.167966 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9dmgd" Jan 22 15:30:03 crc kubenswrapper[4825]: I0122 15:30:03.168294 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9dmgd" Jan 22 15:30:03 crc kubenswrapper[4825]: I0122 15:30:03.217391 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9dmgd" Jan 22 15:30:03 crc kubenswrapper[4825]: I0122 15:30:03.588650 4825 generic.go:334] "Generic (PLEG): container finished" podID="da621d4b-84ff-4f2b-a8bf-db16fa054f4e" containerID="d467e36b7b1faefb9f88e8bb60813f3e0e222f49ca7798b5f0ee66c5bd195fda" exitCode=0 Jan 22 15:30:03 crc kubenswrapper[4825]: I0122 15:30:03.588878 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n" event={"ID":"da621d4b-84ff-4f2b-a8bf-db16fa054f4e","Type":"ContainerDied","Data":"d467e36b7b1faefb9f88e8bb60813f3e0e222f49ca7798b5f0ee66c5bd195fda"} Jan 22 15:30:03 crc kubenswrapper[4825]: I0122 15:30:03.628395 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9dmgd" Jan 22 15:30:04 crc kubenswrapper[4825]: I0122 15:30:04.957539 
4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n" Jan 22 15:30:05 crc kubenswrapper[4825]: I0122 15:30:05.079061 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmrdv\" (UniqueName: \"kubernetes.io/projected/da621d4b-84ff-4f2b-a8bf-db16fa054f4e-kube-api-access-pmrdv\") pod \"da621d4b-84ff-4f2b-a8bf-db16fa054f4e\" (UID: \"da621d4b-84ff-4f2b-a8bf-db16fa054f4e\") " Jan 22 15:30:05 crc kubenswrapper[4825]: I0122 15:30:05.079130 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da621d4b-84ff-4f2b-a8bf-db16fa054f4e-config-volume\") pod \"da621d4b-84ff-4f2b-a8bf-db16fa054f4e\" (UID: \"da621d4b-84ff-4f2b-a8bf-db16fa054f4e\") " Jan 22 15:30:05 crc kubenswrapper[4825]: I0122 15:30:05.079170 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da621d4b-84ff-4f2b-a8bf-db16fa054f4e-secret-volume\") pod \"da621d4b-84ff-4f2b-a8bf-db16fa054f4e\" (UID: \"da621d4b-84ff-4f2b-a8bf-db16fa054f4e\") " Jan 22 15:30:05 crc kubenswrapper[4825]: I0122 15:30:05.079794 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da621d4b-84ff-4f2b-a8bf-db16fa054f4e-config-volume" (OuterVolumeSpecName: "config-volume") pod "da621d4b-84ff-4f2b-a8bf-db16fa054f4e" (UID: "da621d4b-84ff-4f2b-a8bf-db16fa054f4e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:30:05 crc kubenswrapper[4825]: I0122 15:30:05.083679 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da621d4b-84ff-4f2b-a8bf-db16fa054f4e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "da621d4b-84ff-4f2b-a8bf-db16fa054f4e" (UID: "da621d4b-84ff-4f2b-a8bf-db16fa054f4e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:30:05 crc kubenswrapper[4825]: I0122 15:30:05.083770 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da621d4b-84ff-4f2b-a8bf-db16fa054f4e-kube-api-access-pmrdv" (OuterVolumeSpecName: "kube-api-access-pmrdv") pod "da621d4b-84ff-4f2b-a8bf-db16fa054f4e" (UID: "da621d4b-84ff-4f2b-a8bf-db16fa054f4e"). InnerVolumeSpecName "kube-api-access-pmrdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:30:05 crc kubenswrapper[4825]: I0122 15:30:05.180647 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmrdv\" (UniqueName: \"kubernetes.io/projected/da621d4b-84ff-4f2b-a8bf-db16fa054f4e-kube-api-access-pmrdv\") on node \"crc\" DevicePath \"\"" Jan 22 15:30:05 crc kubenswrapper[4825]: I0122 15:30:05.180690 4825 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da621d4b-84ff-4f2b-a8bf-db16fa054f4e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 15:30:05 crc kubenswrapper[4825]: I0122 15:30:05.180700 4825 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da621d4b-84ff-4f2b-a8bf-db16fa054f4e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 15:30:05 crc kubenswrapper[4825]: I0122 15:30:05.541346 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:30:05 crc kubenswrapper[4825]: I0122 15:30:05.541508 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:30:05 crc kubenswrapper[4825]: I0122 15:30:05.602738 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n" event={"ID":"da621d4b-84ff-4f2b-a8bf-db16fa054f4e","Type":"ContainerDied","Data":"18e05ff392b04c9405f7b7cd697ef980aee9fe3f5f0bda25a1bb830d87270ab7"} Jan 22 15:30:05 crc kubenswrapper[4825]: I0122 15:30:05.603359 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18e05ff392b04c9405f7b7cd697ef980aee9fe3f5f0bda25a1bb830d87270ab7" Jan 22 15:30:05 crc kubenswrapper[4825]: I0122 15:30:05.602813 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.112496 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" podUID="b7fe40a8-ddaa-42e0-af67-c0e8f88f0351" containerName="registry" containerID="cri-o://632bdc04edf2b5f2e2fefa31a8faa622df65e094aba5d895a2b676c8cf843421" gracePeriod=30 Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.509870 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.575782 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-registry-tls\") pod \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.575875 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-bound-sa-token\") pod \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.575921 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-installation-pull-secrets\") pod \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.575959 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-registry-certificates\") pod \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.576011 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-ca-trust-extracted\") pod \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.576050 4825 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-trusted-ca\") pod \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.576105 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68gcj\" (UniqueName: \"kubernetes.io/projected/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-kube-api-access-68gcj\") pod \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.576232 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\" (UID: \"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351\") " Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.577484 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.577846 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.582913 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-kube-api-access-68gcj" (OuterVolumeSpecName: "kube-api-access-68gcj") pod "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351"). InnerVolumeSpecName "kube-api-access-68gcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.583186 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.583503 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.585460 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.587653 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.599313 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351" (UID: "b7fe40a8-ddaa-42e0-af67-c0e8f88f0351"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.678316 4825 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.678377 4825 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.678401 4825 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.678418 4825 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.678438 4825 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.678454 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68gcj\" (UniqueName: \"kubernetes.io/projected/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-kube-api-access-68gcj\") on node \"crc\" DevicePath \"\"" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.678471 4825 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.800204 4825 generic.go:334] "Generic (PLEG): container finished" podID="b7fe40a8-ddaa-42e0-af67-c0e8f88f0351" containerID="632bdc04edf2b5f2e2fefa31a8faa622df65e094aba5d895a2b676c8cf843421" exitCode=0 Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.800270 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" event={"ID":"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351","Type":"ContainerDied","Data":"632bdc04edf2b5f2e2fefa31a8faa622df65e094aba5d895a2b676c8cf843421"} Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.800286 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.800315 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-z9s9f" event={"ID":"b7fe40a8-ddaa-42e0-af67-c0e8f88f0351","Type":"ContainerDied","Data":"22e09198b3f77ff1dcbf4aa4cf13e87d0504343ee0d69e759758a30da0539363"} Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.800363 4825 scope.go:117] "RemoveContainer" containerID="632bdc04edf2b5f2e2fefa31a8faa622df65e094aba5d895a2b676c8cf843421" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.827391 4825 scope.go:117] "RemoveContainer" containerID="632bdc04edf2b5f2e2fefa31a8faa622df65e094aba5d895a2b676c8cf843421" Jan 22 15:30:26 crc kubenswrapper[4825]: E0122 15:30:26.828230 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"632bdc04edf2b5f2e2fefa31a8faa622df65e094aba5d895a2b676c8cf843421\": container with ID starting with 632bdc04edf2b5f2e2fefa31a8faa622df65e094aba5d895a2b676c8cf843421 not found: ID does not exist" containerID="632bdc04edf2b5f2e2fefa31a8faa622df65e094aba5d895a2b676c8cf843421" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.828307 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"632bdc04edf2b5f2e2fefa31a8faa622df65e094aba5d895a2b676c8cf843421"} err="failed to get container status \"632bdc04edf2b5f2e2fefa31a8faa622df65e094aba5d895a2b676c8cf843421\": rpc error: code = NotFound desc = could not find container \"632bdc04edf2b5f2e2fefa31a8faa622df65e094aba5d895a2b676c8cf843421\": container with ID starting with 632bdc04edf2b5f2e2fefa31a8faa622df65e094aba5d895a2b676c8cf843421 not found: ID does not exist" Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.954844 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-z9s9f"] Jan 22 15:30:26 crc kubenswrapper[4825]: I0122 15:30:26.958678 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z9s9f"] Jan 22 15:30:27 crc kubenswrapper[4825]: I0122 15:30:27.530061 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7fe40a8-ddaa-42e0-af67-c0e8f88f0351" path="/var/lib/kubelet/pods/b7fe40a8-ddaa-42e0-af67-c0e8f88f0351/volumes" Jan 22 15:30:35 crc kubenswrapper[4825]: I0122 15:30:35.541679 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:30:35 crc kubenswrapper[4825]: I0122 15:30:35.542298 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:31:05 crc kubenswrapper[4825]: I0122 15:31:05.542117 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:31:05 crc kubenswrapper[4825]: I0122 15:31:05.542625 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 
15:31:05 crc kubenswrapper[4825]: I0122 15:31:05.542691 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:31:05 crc kubenswrapper[4825]: I0122 15:31:05.543376 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"70136c7cc46f39bc356a97e0057511092c22deb2e74a289548614a289b601d0b"} pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 15:31:05 crc kubenswrapper[4825]: I0122 15:31:05.543442 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" containerID="cri-o://70136c7cc46f39bc356a97e0057511092c22deb2e74a289548614a289b601d0b" gracePeriod=600 Jan 22 15:31:06 crc kubenswrapper[4825]: I0122 15:31:06.104504 4825 generic.go:334] "Generic (PLEG): container finished" podID="1d6015ae-d193-4854-9861-dc4384510fdb" containerID="70136c7cc46f39bc356a97e0057511092c22deb2e74a289548614a289b601d0b" exitCode=0 Jan 22 15:31:06 crc kubenswrapper[4825]: I0122 15:31:06.104595 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerDied","Data":"70136c7cc46f39bc356a97e0057511092c22deb2e74a289548614a289b601d0b"} Jan 22 15:31:06 crc kubenswrapper[4825]: I0122 15:31:06.105434 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerStarted","Data":"8c25a004991eab3ed81c43c73a2fdebc13cc5dbf35ee92f9f96732e04ec4d469"} Jan 22 15:31:06 crc kubenswrapper[4825]: 
I0122 15:31:06.105464 4825 scope.go:117] "RemoveContainer" containerID="fd64b180201cf5206a8d92a0da09535af20c70d8597a94001c2491eaa1778b42" Jan 22 15:32:13 crc kubenswrapper[4825]: I0122 15:32:13.878886 4825 scope.go:117] "RemoveContainer" containerID="da163c98ade44996898e0e4a4560fb51ab37ddfd1522dc2adf50275c39cc06fc" Jan 22 15:32:13 crc kubenswrapper[4825]: I0122 15:32:13.913292 4825 scope.go:117] "RemoveContainer" containerID="78995e6260e6066a0e3a09656206ae1c0e4a7cffdcdf6ee0c7f8b4b74361b63f" Jan 22 15:33:05 crc kubenswrapper[4825]: I0122 15:33:05.541422 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:33:05 crc kubenswrapper[4825]: I0122 15:33:05.542055 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:33:13 crc kubenswrapper[4825]: I0122 15:33:13.950179 4825 scope.go:117] "RemoveContainer" containerID="d03a30adc290233b767a819884fe622e42d1c9ec30410399eaad7e8502c0a1ca" Jan 22 15:33:13 crc kubenswrapper[4825]: I0122 15:33:13.983069 4825 scope.go:117] "RemoveContainer" containerID="f2f7fb533441e31155fc3041f5725dd98fd4782f8057ec6dc82a1f789323dddc" Jan 22 15:33:35 crc kubenswrapper[4825]: I0122 15:33:35.541606 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:33:35 crc kubenswrapper[4825]: 
I0122 15:33:35.542397 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:33:50 crc kubenswrapper[4825]: I0122 15:33:50.988667 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn"] Jan 22 15:33:50 crc kubenswrapper[4825]: E0122 15:33:50.989533 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da621d4b-84ff-4f2b-a8bf-db16fa054f4e" containerName="collect-profiles" Jan 22 15:33:50 crc kubenswrapper[4825]: I0122 15:33:50.989552 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="da621d4b-84ff-4f2b-a8bf-db16fa054f4e" containerName="collect-profiles" Jan 22 15:33:50 crc kubenswrapper[4825]: E0122 15:33:50.989573 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7fe40a8-ddaa-42e0-af67-c0e8f88f0351" containerName="registry" Jan 22 15:33:50 crc kubenswrapper[4825]: I0122 15:33:50.989581 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7fe40a8-ddaa-42e0-af67-c0e8f88f0351" containerName="registry" Jan 22 15:33:50 crc kubenswrapper[4825]: I0122 15:33:50.989747 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7fe40a8-ddaa-42e0-af67-c0e8f88f0351" containerName="registry" Jan 22 15:33:50 crc kubenswrapper[4825]: I0122 15:33:50.989769 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="da621d4b-84ff-4f2b-a8bf-db16fa054f4e" containerName="collect-profiles" Jan 22 15:33:50 crc kubenswrapper[4825]: I0122 15:33:50.990828 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn" Jan 22 15:33:51 crc kubenswrapper[4825]: I0122 15:33:51.000729 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 15:33:51 crc kubenswrapper[4825]: I0122 15:33:51.051252 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlqh7\" (UniqueName: \"kubernetes.io/projected/ddaef815-cdc9-496c-84b6-854d4d626f48-kube-api-access-zlqh7\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn\" (UID: \"ddaef815-cdc9-496c-84b6-854d4d626f48\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn" Jan 22 15:33:51 crc kubenswrapper[4825]: I0122 15:33:51.051373 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ddaef815-cdc9-496c-84b6-854d4d626f48-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn\" (UID: \"ddaef815-cdc9-496c-84b6-854d4d626f48\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn" Jan 22 15:33:51 crc kubenswrapper[4825]: I0122 15:33:51.051411 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ddaef815-cdc9-496c-84b6-854d4d626f48-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn\" (UID: \"ddaef815-cdc9-496c-84b6-854d4d626f48\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn" Jan 22 15:33:51 crc kubenswrapper[4825]: I0122 15:33:51.059401 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn"] Jan 22 15:33:51 crc kubenswrapper[4825]: 
I0122 15:33:51.152262 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ddaef815-cdc9-496c-84b6-854d4d626f48-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn\" (UID: \"ddaef815-cdc9-496c-84b6-854d4d626f48\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn" Jan 22 15:33:51 crc kubenswrapper[4825]: I0122 15:33:51.152318 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ddaef815-cdc9-496c-84b6-854d4d626f48-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn\" (UID: \"ddaef815-cdc9-496c-84b6-854d4d626f48\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn" Jan 22 15:33:51 crc kubenswrapper[4825]: I0122 15:33:51.152358 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlqh7\" (UniqueName: \"kubernetes.io/projected/ddaef815-cdc9-496c-84b6-854d4d626f48-kube-api-access-zlqh7\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn\" (UID: \"ddaef815-cdc9-496c-84b6-854d4d626f48\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn" Jan 22 15:33:51 crc kubenswrapper[4825]: I0122 15:33:51.152773 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ddaef815-cdc9-496c-84b6-854d4d626f48-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn\" (UID: \"ddaef815-cdc9-496c-84b6-854d4d626f48\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn" Jan 22 15:33:51 crc kubenswrapper[4825]: I0122 15:33:51.152810 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ddaef815-cdc9-496c-84b6-854d4d626f48-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn\" (UID: \"ddaef815-cdc9-496c-84b6-854d4d626f48\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn" Jan 22 15:33:51 crc kubenswrapper[4825]: I0122 15:33:51.170849 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlqh7\" (UniqueName: \"kubernetes.io/projected/ddaef815-cdc9-496c-84b6-854d4d626f48-kube-api-access-zlqh7\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn\" (UID: \"ddaef815-cdc9-496c-84b6-854d4d626f48\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn" Jan 22 15:33:51 crc kubenswrapper[4825]: I0122 15:33:51.309478 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn" Jan 22 15:33:51 crc kubenswrapper[4825]: I0122 15:33:51.745199 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn"] Jan 22 15:33:51 crc kubenswrapper[4825]: W0122 15:33:51.755594 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddaef815_cdc9_496c_84b6_854d4d626f48.slice/crio-ed6c68afda2f1ebfb4e8096290571c49b51c7bce08d5f72ea506ba10a7d34ed6 WatchSource:0}: Error finding container ed6c68afda2f1ebfb4e8096290571c49b51c7bce08d5f72ea506ba10a7d34ed6: Status 404 returned error can't find the container with id ed6c68afda2f1ebfb4e8096290571c49b51c7bce08d5f72ea506ba10a7d34ed6 Jan 22 15:33:52 crc kubenswrapper[4825]: I0122 15:33:52.381774 4825 generic.go:334] "Generic (PLEG): container finished" podID="ddaef815-cdc9-496c-84b6-854d4d626f48" containerID="f43fd2b6c0925df280d5b102f60a7d280c30919cfa3cc00dcfd6f933417836e9" exitCode=0 
Jan 22 15:33:52 crc kubenswrapper[4825]: I0122 15:33:52.381988 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn" event={"ID":"ddaef815-cdc9-496c-84b6-854d4d626f48","Type":"ContainerDied","Data":"f43fd2b6c0925df280d5b102f60a7d280c30919cfa3cc00dcfd6f933417836e9"} Jan 22 15:33:52 crc kubenswrapper[4825]: I0122 15:33:52.382013 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn" event={"ID":"ddaef815-cdc9-496c-84b6-854d4d626f48","Type":"ContainerStarted","Data":"ed6c68afda2f1ebfb4e8096290571c49b51c7bce08d5f72ea506ba10a7d34ed6"} Jan 22 15:33:52 crc kubenswrapper[4825]: I0122 15:33:52.383361 4825 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 15:34:01 crc kubenswrapper[4825]: I0122 15:34:01.769722 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c8f2b"] Jan 22 15:34:01 crc kubenswrapper[4825]: I0122 15:34:01.770488 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovn-controller" containerID="cri-o://eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6" gracePeriod=30 Jan 22 15:34:01 crc kubenswrapper[4825]: I0122 15:34:01.770596 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="northd" containerID="cri-o://5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d" gracePeriod=30 Jan 22 15:34:01 crc kubenswrapper[4825]: I0122 15:34:01.770590 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" 
podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="nbdb" containerID="cri-o://255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275" gracePeriod=30 Jan 22 15:34:01 crc kubenswrapper[4825]: I0122 15:34:01.770640 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="kube-rbac-proxy-node" containerID="cri-o://21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca" gracePeriod=30 Jan 22 15:34:01 crc kubenswrapper[4825]: I0122 15:34:01.770670 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovn-acl-logging" containerID="cri-o://282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908" gracePeriod=30 Jan 22 15:34:01 crc kubenswrapper[4825]: I0122 15:34:01.770811 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="sbdb" containerID="cri-o://a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20" gracePeriod=30 Jan 22 15:34:01 crc kubenswrapper[4825]: I0122 15:34:01.770631 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932" gracePeriod=30 Jan 22 15:34:01 crc kubenswrapper[4825]: I0122 15:34:01.838173 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovnkube-controller" 
containerID="cri-o://27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924" gracePeriod=30 Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.110404 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c8f2b_a2a796f1-0c22-4a59-a525-e426ecf221bc/ovnkube-controller/2.log" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.112862 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c8f2b_a2a796f1-0c22-4a59-a525-e426ecf221bc/ovn-acl-logging/0.log" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.113438 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c8f2b_a2a796f1-0c22-4a59-a525-e426ecf221bc/ovn-controller/0.log" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.114927 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.260227 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a2a796f1-0c22-4a59-a525-e426ecf221bc-ovn-node-metrics-cert\") pod \"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.260280 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a2a796f1-0c22-4a59-a525-e426ecf221bc-ovnkube-script-lib\") pod \"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.260307 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-var-lib-openvswitch\") pod 
\"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.260329 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm2tb\" (UniqueName: \"kubernetes.io/projected/a2a796f1-0c22-4a59-a525-e426ecf221bc-kube-api-access-mm2tb\") pod \"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.260374 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-run-ovn-kubernetes\") pod \"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.260396 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.260541 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.260741 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a796f1-0c22-4a59-a525-e426ecf221bc-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261243 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-run-openvswitch\") pod \"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261266 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-etc-openvswitch\") pod \"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261297 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-run-netns\") pod \"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261328 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a2a796f1-0c22-4a59-a525-e426ecf221bc-ovnkube-config\") pod \"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261351 4825 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-cni-netd\") pod \"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261368 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-run-systemd\") pod \"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261388 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261405 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-slash\") pod \"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261422 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-systemd-units\") pod \"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261442 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-run-ovn\") pod 
\"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261463 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-cni-bin\") pod \"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261486 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-kubelet\") pod \"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261500 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-node-log\") pod \"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261515 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a2a796f1-0c22-4a59-a525-e426ecf221bc-env-overrides\") pod \"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261530 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-log-socket\") pod \"a2a796f1-0c22-4a59-a525-e426ecf221bc\" (UID: \"a2a796f1-0c22-4a59-a525-e426ecf221bc\") " Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261749 4825 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/a2a796f1-0c22-4a59-a525-e426ecf221bc-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261764 4825 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261773 4825 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261803 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-log-socket" (OuterVolumeSpecName: "log-socket") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261832 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261831 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a796f1-0c22-4a59-a525-e426ecf221bc-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261863 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261898 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261923 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261949 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.261974 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.262017 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-slash" (OuterVolumeSpecName: "host-slash") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.262042 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.262064 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.262088 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.262124 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-node-log" (OuterVolumeSpecName: "node-log") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.262383 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a796f1-0c22-4a59-a525-e426ecf221bc-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.265927 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2a796f1-0c22-4a59-a525-e426ecf221bc-kube-api-access-mm2tb" (OuterVolumeSpecName: "kube-api-access-mm2tb") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "kube-api-access-mm2tb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.266716 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2a796f1-0c22-4a59-a525-e426ecf221bc-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.269571 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lnb8w"] Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.269778 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="northd" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.269796 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="northd" Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.269811 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovnkube-controller" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.269822 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovnkube-controller" Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.269830 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="nbdb" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.269842 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="nbdb" Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.269853 4825 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovn-acl-logging" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.269858 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovn-acl-logging" Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.269866 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="sbdb" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.269872 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="sbdb" Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.269879 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovn-controller" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.269886 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovn-controller" Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.269895 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovnkube-controller" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.269901 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovnkube-controller" Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.269910 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovnkube-controller" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.269916 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovnkube-controller" Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.269925 4825 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.269932 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.269940 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="kubecfg-setup" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.269946 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="kubecfg-setup" Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.269951 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="kube-rbac-proxy-node" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.269957 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="kube-rbac-proxy-node" Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.269964 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovnkube-controller" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.269969 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovnkube-controller" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.270089 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovnkube-controller" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.270099 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="northd" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.270107 4825 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovnkube-controller" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.270114 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="kube-rbac-proxy-node" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.270123 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovn-controller" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.270131 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="sbdb" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.270138 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovn-acl-logging" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.270146 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.270153 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="nbdb" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.270161 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovnkube-controller" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.270351 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerName="ovnkube-controller" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.271913 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.275704 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a2a796f1-0c22-4a59-a525-e426ecf221bc" (UID: "a2a796f1-0c22-4a59-a525-e426ecf221bc"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.363283 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-run-ovn-kubernetes\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.363334 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-run-netns\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.363357 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-kubelet\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.363377 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-node-log\") pod 
\"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.363398 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-ovnkube-script-lib\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.363421 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-slash\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.363436 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-run-systemd\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.363713 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-run-openvswitch\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.363849 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-var-lib-openvswitch\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.363903 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-ovnkube-config\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.363945 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-cni-netd\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364043 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-env-overrides\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364116 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364191 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-systemd-units\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364243 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-run-ovn\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364276 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-cni-bin\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364348 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-ovn-node-metrics-cert\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364402 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm6l2\" (UniqueName: \"kubernetes.io/projected/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-kube-api-access-wm6l2\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364444 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-log-socket\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364491 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-etc-openvswitch\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364596 4825 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364624 4825 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-slash\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364643 4825 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364663 4825 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364683 4825 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364699 4825 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364715 4825 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-node-log\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364747 4825 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a2a796f1-0c22-4a59-a525-e426ecf221bc-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364763 4825 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-log-socket\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364782 4825 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a2a796f1-0c22-4a59-a525-e426ecf221bc-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364800 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm2tb\" (UniqueName: \"kubernetes.io/projected/a2a796f1-0c22-4a59-a525-e426ecf221bc-kube-api-access-mm2tb\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364818 4825 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 
15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364834 4825 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364853 4825 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364870 4825 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a2a796f1-0c22-4a59-a525-e426ecf221bc-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364886 4825 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.364906 4825 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a2a796f1-0c22-4a59-a525-e426ecf221bc-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.446439 4825 generic.go:334] "Generic (PLEG): container finished" podID="ddaef815-cdc9-496c-84b6-854d4d626f48" containerID="ecc0dbc926572207d4bb6fb85dfbf081500eab173a21ff0f608022c57581b02c" exitCode=0 Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.446547 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn" event={"ID":"ddaef815-cdc9-496c-84b6-854d4d626f48","Type":"ContainerDied","Data":"ecc0dbc926572207d4bb6fb85dfbf081500eab173a21ff0f608022c57581b02c"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 
15:34:02.449214 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c8f2b_a2a796f1-0c22-4a59-a525-e426ecf221bc/ovnkube-controller/2.log" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.452589 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c8f2b_a2a796f1-0c22-4a59-a525-e426ecf221bc/ovn-acl-logging/0.log" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.453089 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c8f2b_a2a796f1-0c22-4a59-a525-e426ecf221bc/ovn-controller/0.log" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.453531 4825 generic.go:334] "Generic (PLEG): container finished" podID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerID="27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924" exitCode=0 Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.453564 4825 generic.go:334] "Generic (PLEG): container finished" podID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerID="a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20" exitCode=0 Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.453573 4825 generic.go:334] "Generic (PLEG): container finished" podID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerID="255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275" exitCode=0 Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.453584 4825 generic.go:334] "Generic (PLEG): container finished" podID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerID="5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d" exitCode=0 Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.453593 4825 generic.go:334] "Generic (PLEG): container finished" podID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerID="cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932" exitCode=0 Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.453601 4825 
generic.go:334] "Generic (PLEG): container finished" podID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerID="21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca" exitCode=0 Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.453609 4825 generic.go:334] "Generic (PLEG): container finished" podID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerID="282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908" exitCode=143 Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.453617 4825 generic.go:334] "Generic (PLEG): container finished" podID="a2a796f1-0c22-4a59-a525-e426ecf221bc" containerID="eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6" exitCode=143 Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.453623 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.453610 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerDied","Data":"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.453759 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerDied","Data":"a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.453817 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerDied","Data":"255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.453899 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerDied","Data":"5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.453821 4825 scope.go:117] "RemoveContainer" containerID="27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.453951 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerDied","Data":"cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.454019 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerDied","Data":"21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.454036 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.454051 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.454060 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.454069 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.454076 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.454083 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.454090 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.454098 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.454104 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.454115 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerDied","Data":"282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.454127 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.454135 4825 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.454142 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.454149 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.454155 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.454162 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462128 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462185 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462194 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462202 4825 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462221 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerDied","Data":"eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462242 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462252 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462260 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462268 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462276 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462284 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932"} Jan 22 
15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462292 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462299 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462317 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462325 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462336 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8f2b" event={"ID":"a2a796f1-0c22-4a59-a525-e426ecf221bc","Type":"ContainerDied","Data":"4ff920ac3bcec2a0f6c60e684001728a74092e9ad118eebab33e48b7caafb953"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462349 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462358 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462366 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462375 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462397 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462410 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462417 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462428 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462436 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.462443 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.474160 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-ljkjt_049abb37-810d-475f-b042-bceb43e81dd5/kube-multus/1.log" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.476864 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ljkjt_049abb37-810d-475f-b042-bceb43e81dd5/kube-multus/0.log" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.476899 4825 generic.go:334] "Generic (PLEG): container finished" podID="049abb37-810d-475f-b042-bceb43e81dd5" containerID="f67902ec5693f8ee504f3f703021123e51609876caa0e33faeb018883a8aca56" exitCode=2 Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.476955 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ljkjt" event={"ID":"049abb37-810d-475f-b042-bceb43e81dd5","Type":"ContainerDied","Data":"f67902ec5693f8ee504f3f703021123e51609876caa0e33faeb018883a8aca56"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.476997 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f"} Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.477033 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.477112 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-systemd-units\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 
15:34:02.477139 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-cni-bin\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.477168 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-run-ovn\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.477241 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-ovn-node-metrics-cert\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.477358 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-cni-bin\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.477430 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-systemd-units\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.477487 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.477550 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm6l2\" (UniqueName: \"kubernetes.io/projected/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-kube-api-access-wm6l2\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.477669 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-run-ovn\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.477721 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-log-socket\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.478049 4825 scope.go:117] "RemoveContainer" containerID="f67902ec5693f8ee504f3f703021123e51609876caa0e33faeb018883a8aca56" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.478195 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-etc-openvswitch\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 
15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.478197 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-log-socket\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.478438 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-ljkjt_openshift-multus(049abb37-810d-475f-b042-bceb43e81dd5)\"" pod="openshift-multus/multus-ljkjt" podUID="049abb37-810d-475f-b042-bceb43e81dd5" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.478738 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-run-ovn-kubernetes\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.478785 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-run-netns\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.478834 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-kubelet\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.478836 
4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-etc-openvswitch\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.478858 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-node-log\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.478996 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-kubelet\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.479001 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-node-log\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.479019 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-run-ovn-kubernetes\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.479039 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-run-netns\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.479058 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-ovnkube-script-lib\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.479225 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-slash\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.479445 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-run-systemd\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.479473 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-run-openvswitch\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.479494 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-var-lib-openvswitch\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.479519 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-ovnkube-config\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.479568 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-cni-netd\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.479619 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-cni-netd\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.479631 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-host-slash\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.479677 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-env-overrides\") pod 
\"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.479750 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-run-systemd\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.479769 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-run-openvswitch\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.479788 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-var-lib-openvswitch\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.480021 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-ovnkube-script-lib\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.480304 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-ovnkube-config\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.486618 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-env-overrides\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.488646 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-ovn-node-metrics-cert\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.499922 4825 scope.go:117] "RemoveContainer" containerID="14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.505343 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm6l2\" (UniqueName: \"kubernetes.io/projected/8f03cd27-76a9-43cd-bb8a-f19daba43f4a-kube-api-access-wm6l2\") pod \"ovnkube-node-lnb8w\" (UID: \"8f03cd27-76a9-43cd-bb8a-f19daba43f4a\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.533832 4825 scope.go:117] "RemoveContainer" containerID="a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.539328 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c8f2b"]
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.559231 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c8f2b"]
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.580446 4825 scope.go:117] "RemoveContainer" containerID="255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.586670 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.595578 4825 scope.go:117] "RemoveContainer" containerID="5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.620431 4825 scope.go:117] "RemoveContainer" containerID="cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932"
Jan 22 15:34:02 crc kubenswrapper[4825]: W0122 15:34:02.626917 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f03cd27_76a9_43cd_bb8a_f19daba43f4a.slice/crio-397dc465a1df72a5d24f57c484530f41553e29fb79145fbb116f55c0d5df85c8 WatchSource:0}: Error finding container 397dc465a1df72a5d24f57c484530f41553e29fb79145fbb116f55c0d5df85c8: Status 404 returned error can't find the container with id 397dc465a1df72a5d24f57c484530f41553e29fb79145fbb116f55c0d5df85c8
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.655687 4825 scope.go:117] "RemoveContainer" containerID="21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.689107 4825 scope.go:117] "RemoveContainer" containerID="282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.709411 4825 scope.go:117] "RemoveContainer" containerID="eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.788566 4825 scope.go:117] "RemoveContainer" containerID="afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.809121 4825 scope.go:117] "RemoveContainer" containerID="27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924"
Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.809798 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924\": container with ID starting with 27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924 not found: ID does not exist" containerID="27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.809836 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924"} err="failed to get container status \"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924\": rpc error: code = NotFound desc = could not find container \"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924\": container with ID starting with 27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.809863 4825 scope.go:117] "RemoveContainer" containerID="14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e"
Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.810203 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\": container with ID starting with 14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e not found: ID does not exist" containerID="14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.810267 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e"} err="failed to get container status \"14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\": rpc error: code = NotFound desc = could not find container \"14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\": container with ID starting with 14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.810291 4825 scope.go:117] "RemoveContainer" containerID="a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20"
Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.811308 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\": container with ID starting with a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20 not found: ID does not exist" containerID="a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.811346 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20"} err="failed to get container status \"a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\": rpc error: code = NotFound desc = could not find container \"a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\": container with ID starting with a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.811368 4825 scope.go:117] "RemoveContainer" containerID="255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275"
Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.811748 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\": container with ID starting with 255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275 not found: ID does not exist" containerID="255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.811770 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275"} err="failed to get container status \"255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\": rpc error: code = NotFound desc = could not find container \"255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\": container with ID starting with 255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.811784 4825 scope.go:117] "RemoveContainer" containerID="5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d"
Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.812196 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\": container with ID starting with 5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d not found: ID does not exist" containerID="5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.812419 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d"} err="failed to get container status \"5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\": rpc error: code = NotFound desc = could not find container \"5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\": container with ID starting with 5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.812441 4825 scope.go:117] "RemoveContainer" containerID="cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932"
Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.814174 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\": container with ID starting with cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932 not found: ID does not exist" containerID="cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.814204 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932"} err="failed to get container status \"cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\": rpc error: code = NotFound desc = could not find container \"cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\": container with ID starting with cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.814226 4825 scope.go:117] "RemoveContainer" containerID="21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca"
Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.814630 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\": container with ID starting with 21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca not found: ID does not exist" containerID="21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.814657 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca"} err="failed to get container status \"21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\": rpc error: code = NotFound desc = could not find container \"21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\": container with ID starting with 21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.814675 4825 scope.go:117] "RemoveContainer" containerID="282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908"
Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.819079 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\": container with ID starting with 282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908 not found: ID does not exist" containerID="282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.819104 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908"} err="failed to get container status \"282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\": rpc error: code = NotFound desc = could not find container \"282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\": container with ID starting with 282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.819127 4825 scope.go:117] "RemoveContainer" containerID="eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6"
Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.819475 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\": container with ID starting with eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6 not found: ID does not exist" containerID="eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.819504 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6"} err="failed to get container status \"eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\": rpc error: code = NotFound desc = could not find container \"eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\": container with ID starting with eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.819521 4825 scope.go:117] "RemoveContainer" containerID="afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91"
Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.819736 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\": container with ID starting with afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91 not found: ID does not exist" containerID="afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.819764 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91"} err="failed to get container status \"afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\": rpc error: code = NotFound desc = could not find container \"afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\": container with ID starting with afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.819781 4825 scope.go:117] "RemoveContainer" containerID="27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.820066 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924"} err="failed to get container status \"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924\": rpc error: code = NotFound desc = could not find container \"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924\": container with ID starting with 27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.820091 4825 scope.go:117] "RemoveContainer" containerID="14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.820258 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e"} err="failed to get container status \"14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\": rpc error: code = NotFound desc = could not find container \"14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\": container with ID starting with 14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.820279 4825 scope.go:117] "RemoveContainer" containerID="a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.820578 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20"} err="failed to get container status \"a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\": rpc error: code = NotFound desc = could not find container \"a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\": container with ID starting with a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.820601 4825 scope.go:117] "RemoveContainer" containerID="255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.820887 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275"} err="failed to get container status \"255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\": rpc error: code = NotFound desc = could not find container \"255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\": container with ID starting with 255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.820910 4825 scope.go:117] "RemoveContainer" containerID="5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.821115 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d"} err="failed to get container status \"5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\": rpc error: code = NotFound desc = could not find container \"5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\": container with ID starting with 5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.821139 4825 scope.go:117] "RemoveContainer" containerID="cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.821287 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932"} err="failed to get container status \"cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\": rpc error: code = NotFound desc = could not find container \"cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\": container with ID starting with cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.821303 4825 scope.go:117] "RemoveContainer" containerID="21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.821436 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca"} err="failed to get container status \"21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\": rpc error: code = NotFound desc = could not find container \"21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\": container with ID starting with 21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.821457 4825 scope.go:117] "RemoveContainer" containerID="282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.821581 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908"} err="failed to get container status \"282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\": rpc error: code = NotFound desc = could not find container \"282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\": container with ID starting with 282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.821597 4825 scope.go:117] "RemoveContainer" containerID="eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.821869 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6"} err="failed to get container status \"eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\": rpc error: code = NotFound desc = could not find container \"eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\": container with ID starting with eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.821885 4825 scope.go:117] "RemoveContainer" containerID="afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.822039 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91"} err="failed to get container status \"afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\": rpc error: code = NotFound desc = could not find container \"afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\": container with ID starting with afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.822053 4825 scope.go:117] "RemoveContainer" containerID="27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.822187 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924"} err="failed to get container status \"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924\": rpc error: code = NotFound desc = could not find container \"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924\": container with ID starting with 27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.822206 4825 scope.go:117] "RemoveContainer" containerID="14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.822335 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e"} err="failed to get container status \"14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\": rpc error: code = NotFound desc = could not find container \"14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\": container with ID starting with 14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.822353 4825 scope.go:117] "RemoveContainer" containerID="a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.822715 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20"} err="failed to get container status \"a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\": rpc error: code = NotFound desc = could not find container \"a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\": container with ID starting with a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.822736 4825 scope.go:117] "RemoveContainer" containerID="255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.822919 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275"} err="failed to get container status \"255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\": rpc error: code = NotFound desc = could not find container \"255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\": container with ID starting with 255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.822943 4825 scope.go:117] "RemoveContainer" containerID="5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.824029 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d"} err="failed to get container status \"5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\": rpc error: code = NotFound desc = could not find container \"5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\": container with ID starting with 5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.824070 4825 scope.go:117] "RemoveContainer" containerID="cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.824868 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932"} err="failed to get container status \"cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\": rpc error: code = NotFound desc = could not find container \"cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\": container with ID starting with cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.824889 4825 scope.go:117] "RemoveContainer" containerID="21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.825131 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca"} err="failed to get container status \"21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\": rpc error: code = NotFound desc = could not find container \"21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\": container with ID starting with 21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.825149 4825 scope.go:117] "RemoveContainer" containerID="282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.825416 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908"} err="failed to get container status \"282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\": rpc error: code = NotFound desc = could not find container \"282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\": container with ID starting with 282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.825443 4825 scope.go:117] "RemoveContainer" containerID="eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.825694 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6"} err="failed to get container status \"eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\": rpc error: code = NotFound desc = could not find container \"eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\": container with ID starting with eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.825725 4825 scope.go:117] "RemoveContainer" containerID="afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.825916 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91"} err="failed to get container status \"afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\": rpc error: code = NotFound desc = could not find container \"afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\": container with ID starting with afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.825936 4825 scope.go:117] "RemoveContainer" containerID="27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.826192 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924"} err="failed to get container status \"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924\": rpc error: code = NotFound desc = could not find container \"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924\": container with ID starting with 27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.826216 4825 scope.go:117] "RemoveContainer" containerID="14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.826418 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e"} err="failed to get container status \"14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\": rpc error: code = NotFound desc = could not find container \"14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e\": container with ID starting with 14335d49d94b197b940afdc2894166703a9b94b58de33fb951744d0b24bb057e not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.826435 4825 scope.go:117] "RemoveContainer" containerID="a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.826597 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20"} err="failed to get container status \"a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\": rpc error: code = NotFound desc = could not find container \"a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20\": container with ID starting with a6384c8f5ab7be5762aab8871cde70ea46e0b5d199620852735ac5d9bcdb7a20 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.826613 4825 scope.go:117] "RemoveContainer" containerID="255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.826835 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275"} err="failed to get container status \"255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\": rpc error: code = NotFound desc = could not find container \"255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275\": container with ID starting with 255b2fdabda22223b5195500936136bb8dc4ff7f2f5b34dc5baa6aae7555f275 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.826852 4825 scope.go:117] "RemoveContainer" containerID="5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.827038 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d"} err="failed to get container status \"5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\": rpc error: code = NotFound desc = could not find container \"5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d\": container with ID starting with 5d0b528603025b36bd21ae39d877650c80f44511b60eea7dab263fed328cf68d not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.827053 4825 scope.go:117] "RemoveContainer" containerID="cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.827229 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932"} err="failed to get container status \"cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\": rpc error: code = NotFound desc = could not find container \"cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932\": container with ID starting with cf2fe7966da1ca29264758ec6d17e71c06318dc641205ab1ace42af14759e932 not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.827246 4825 scope.go:117] "RemoveContainer" containerID="21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.827487 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca"} err="failed to get container status \"21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\": rpc error: code = NotFound desc = could not find container \"21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca\": container with ID starting with 21f693010206c7f846b9157b6380c353060393c4b3cd1f5e1fe9aabf9f7a6fca not found: ID does not exist"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.827502 4825 scope.go:117] "RemoveContainer" containerID="282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908"
Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.827665 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908"} err="failed to get container status \"282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\": rpc error: code = NotFound desc = could 
not find container \"282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908\": container with ID starting with 282db4144553cfe1b9523f7671157d1e56bfc8ac9c220c7dd3b6fa704037b908 not found: ID does not exist" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.827680 4825 scope.go:117] "RemoveContainer" containerID="eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.827833 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6"} err="failed to get container status \"eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\": rpc error: code = NotFound desc = could not find container \"eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6\": container with ID starting with eba5b1eef122f77107cba36b92d842af1853335f4dc39cc88b04e13d86a8f6d6 not found: ID does not exist" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.827850 4825 scope.go:117] "RemoveContainer" containerID="afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.828008 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91"} err="failed to get container status \"afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\": rpc error: code = NotFound desc = could not find container \"afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91\": container with ID starting with afd46509b701e1993e662a2faa908ea86a0a2d8ef3fa65e1465f4f013483cb91 not found: ID does not exist" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 15:34:02.828025 4825 scope.go:117] "RemoveContainer" containerID="27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924" Jan 22 15:34:02 crc kubenswrapper[4825]: I0122 
15:34:02.828191 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924"} err="failed to get container status \"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924\": rpc error: code = NotFound desc = could not find container \"27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924\": container with ID starting with 27c966a6d303ad10c643f83ed18da1db5307dfc653f2fc8bba5403db0f8c9924 not found: ID does not exist" Jan 22 15:34:02 crc kubenswrapper[4825]: E0122 15:34:02.906663 4825 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f03cd27_76a9_43cd_bb8a_f19daba43f4a.slice/crio-conmon-97c141aa704c74d11d801facefeb8838a17870bc7a42cff1dbf18a2c76cb59e0.scope\": RecentStats: unable to find data in memory cache]" Jan 22 15:34:03 crc kubenswrapper[4825]: I0122 15:34:03.483896 4825 generic.go:334] "Generic (PLEG): container finished" podID="ddaef815-cdc9-496c-84b6-854d4d626f48" containerID="91db1ef8c363b909128169d56806e1bc08235072cc1bfee5f9f1bcd63af2b89e" exitCode=0 Jan 22 15:34:03 crc kubenswrapper[4825]: I0122 15:34:03.483969 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn" event={"ID":"ddaef815-cdc9-496c-84b6-854d4d626f48","Type":"ContainerDied","Data":"91db1ef8c363b909128169d56806e1bc08235072cc1bfee5f9f1bcd63af2b89e"} Jan 22 15:34:03 crc kubenswrapper[4825]: I0122 15:34:03.487001 4825 generic.go:334] "Generic (PLEG): container finished" podID="8f03cd27-76a9-43cd-bb8a-f19daba43f4a" containerID="97c141aa704c74d11d801facefeb8838a17870bc7a42cff1dbf18a2c76cb59e0" exitCode=0 Jan 22 15:34:03 crc kubenswrapper[4825]: I0122 15:34:03.487030 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" event={"ID":"8f03cd27-76a9-43cd-bb8a-f19daba43f4a","Type":"ContainerDied","Data":"97c141aa704c74d11d801facefeb8838a17870bc7a42cff1dbf18a2c76cb59e0"} Jan 22 15:34:03 crc kubenswrapper[4825]: I0122 15:34:03.487051 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" event={"ID":"8f03cd27-76a9-43cd-bb8a-f19daba43f4a","Type":"ContainerStarted","Data":"397dc465a1df72a5d24f57c484530f41553e29fb79145fbb116f55c0d5df85c8"} Jan 22 15:34:03 crc kubenswrapper[4825]: I0122 15:34:03.526415 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2a796f1-0c22-4a59-a525-e426ecf221bc" path="/var/lib/kubelet/pods/a2a796f1-0c22-4a59-a525-e426ecf221bc/volumes" Jan 22 15:34:04 crc kubenswrapper[4825]: I0122 15:34:04.500456 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" event={"ID":"8f03cd27-76a9-43cd-bb8a-f19daba43f4a","Type":"ContainerStarted","Data":"d382ed4582c0dd7aab61439949a43a62872f8b37bbc1fc23eae781500f9c87fc"} Jan 22 15:34:04 crc kubenswrapper[4825]: I0122 15:34:04.501038 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" event={"ID":"8f03cd27-76a9-43cd-bb8a-f19daba43f4a","Type":"ContainerStarted","Data":"3fc470ddca72b06023ff6b0739c83923d54141f30b05453f8e4e08583d58eddb"} Jan 22 15:34:04 crc kubenswrapper[4825]: I0122 15:34:04.501058 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" event={"ID":"8f03cd27-76a9-43cd-bb8a-f19daba43f4a","Type":"ContainerStarted","Data":"29989017ed217795e5c8d8a03d481889d9ae3da6f8528df923ceb421f9846202"} Jan 22 15:34:04 crc kubenswrapper[4825]: I0122 15:34:04.501074 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" 
event={"ID":"8f03cd27-76a9-43cd-bb8a-f19daba43f4a","Type":"ContainerStarted","Data":"52d1a46acd15172863318d1f0f08ec2e81751f22d87e4cf881a43c67f89b1748"} Jan 22 15:34:04 crc kubenswrapper[4825]: I0122 15:34:04.501092 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" event={"ID":"8f03cd27-76a9-43cd-bb8a-f19daba43f4a","Type":"ContainerStarted","Data":"791bca55486e3885f2eafa7f95f6892d223a5b83d86813b7ceb23f8babf495f8"} Jan 22 15:34:04 crc kubenswrapper[4825]: I0122 15:34:04.501109 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" event={"ID":"8f03cd27-76a9-43cd-bb8a-f19daba43f4a","Type":"ContainerStarted","Data":"d0ef27ac8e487161d5cb9fa7f80e641c728651c5c1038e7cf204c36a4baf6dc2"} Jan 22 15:34:04 crc kubenswrapper[4825]: I0122 15:34:04.593780 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn" Jan 22 15:34:04 crc kubenswrapper[4825]: I0122 15:34:04.743336 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlqh7\" (UniqueName: \"kubernetes.io/projected/ddaef815-cdc9-496c-84b6-854d4d626f48-kube-api-access-zlqh7\") pod \"ddaef815-cdc9-496c-84b6-854d4d626f48\" (UID: \"ddaef815-cdc9-496c-84b6-854d4d626f48\") " Jan 22 15:34:04 crc kubenswrapper[4825]: I0122 15:34:04.743760 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ddaef815-cdc9-496c-84b6-854d4d626f48-util\") pod \"ddaef815-cdc9-496c-84b6-854d4d626f48\" (UID: \"ddaef815-cdc9-496c-84b6-854d4d626f48\") " Jan 22 15:34:04 crc kubenswrapper[4825]: I0122 15:34:04.744178 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ddaef815-cdc9-496c-84b6-854d4d626f48-bundle\") pod 
\"ddaef815-cdc9-496c-84b6-854d4d626f48\" (UID: \"ddaef815-cdc9-496c-84b6-854d4d626f48\") " Jan 22 15:34:04 crc kubenswrapper[4825]: I0122 15:34:04.748458 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddaef815-cdc9-496c-84b6-854d4d626f48-bundle" (OuterVolumeSpecName: "bundle") pod "ddaef815-cdc9-496c-84b6-854d4d626f48" (UID: "ddaef815-cdc9-496c-84b6-854d4d626f48"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:34:04 crc kubenswrapper[4825]: I0122 15:34:04.753309 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddaef815-cdc9-496c-84b6-854d4d626f48-kube-api-access-zlqh7" (OuterVolumeSpecName: "kube-api-access-zlqh7") pod "ddaef815-cdc9-496c-84b6-854d4d626f48" (UID: "ddaef815-cdc9-496c-84b6-854d4d626f48"). InnerVolumeSpecName "kube-api-access-zlqh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:34:04 crc kubenswrapper[4825]: I0122 15:34:04.771753 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddaef815-cdc9-496c-84b6-854d4d626f48-util" (OuterVolumeSpecName: "util") pod "ddaef815-cdc9-496c-84b6-854d4d626f48" (UID: "ddaef815-cdc9-496c-84b6-854d4d626f48"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:34:04 crc kubenswrapper[4825]: I0122 15:34:04.846764 4825 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ddaef815-cdc9-496c-84b6-854d4d626f48-util\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:04 crc kubenswrapper[4825]: I0122 15:34:04.846823 4825 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ddaef815-cdc9-496c-84b6-854d4d626f48-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:04 crc kubenswrapper[4825]: I0122 15:34:04.846842 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlqh7\" (UniqueName: \"kubernetes.io/projected/ddaef815-cdc9-496c-84b6-854d4d626f48-kube-api-access-zlqh7\") on node \"crc\" DevicePath \"\"" Jan 22 15:34:05 crc kubenswrapper[4825]: I0122 15:34:05.510208 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn" event={"ID":"ddaef815-cdc9-496c-84b6-854d4d626f48","Type":"ContainerDied","Data":"ed6c68afda2f1ebfb4e8096290571c49b51c7bce08d5f72ea506ba10a7d34ed6"} Jan 22 15:34:05 crc kubenswrapper[4825]: I0122 15:34:05.510273 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed6c68afda2f1ebfb4e8096290571c49b51c7bce08d5f72ea506ba10a7d34ed6" Jan 22 15:34:05 crc kubenswrapper[4825]: I0122 15:34:05.510236 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn" Jan 22 15:34:05 crc kubenswrapper[4825]: I0122 15:34:05.541292 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:34:05 crc kubenswrapper[4825]: I0122 15:34:05.541353 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:34:05 crc kubenswrapper[4825]: I0122 15:34:05.541399 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:34:05 crc kubenswrapper[4825]: I0122 15:34:05.542085 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c25a004991eab3ed81c43c73a2fdebc13cc5dbf35ee92f9f96732e04ec4d469"} pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 15:34:05 crc kubenswrapper[4825]: I0122 15:34:05.542156 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" containerID="cri-o://8c25a004991eab3ed81c43c73a2fdebc13cc5dbf35ee92f9f96732e04ec4d469" gracePeriod=600 Jan 22 15:34:06 crc kubenswrapper[4825]: I0122 15:34:06.520157 4825 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" event={"ID":"8f03cd27-76a9-43cd-bb8a-f19daba43f4a","Type":"ContainerStarted","Data":"f392f7931f71df723fcf4e00a1bbf06fa1d881eb2b5fd5c2c0218b0817034e5e"} Jan 22 15:34:06 crc kubenswrapper[4825]: I0122 15:34:06.523967 4825 generic.go:334] "Generic (PLEG): container finished" podID="1d6015ae-d193-4854-9861-dc4384510fdb" containerID="8c25a004991eab3ed81c43c73a2fdebc13cc5dbf35ee92f9f96732e04ec4d469" exitCode=0 Jan 22 15:34:06 crc kubenswrapper[4825]: I0122 15:34:06.524020 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerDied","Data":"8c25a004991eab3ed81c43c73a2fdebc13cc5dbf35ee92f9f96732e04ec4d469"} Jan 22 15:34:06 crc kubenswrapper[4825]: I0122 15:34:06.524075 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerStarted","Data":"5ec0593b524672c0173949c1239ba7fcb03695ca8acb4008e01a270f260b0ff1"} Jan 22 15:34:06 crc kubenswrapper[4825]: I0122 15:34:06.524100 4825 scope.go:117] "RemoveContainer" containerID="70136c7cc46f39bc356a97e0057511092c22deb2e74a289548614a289b601d0b" Jan 22 15:34:09 crc kubenswrapper[4825]: I0122 15:34:09.547410 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" event={"ID":"8f03cd27-76a9-43cd-bb8a-f19daba43f4a","Type":"ContainerStarted","Data":"99136fb52aadb5443f21a25ef443c5bcc777e585bf4ab722fb225677e2b66216"} Jan 22 15:34:09 crc kubenswrapper[4825]: I0122 15:34:09.548000 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:09 crc kubenswrapper[4825]: I0122 15:34:09.548018 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:09 crc kubenswrapper[4825]: I0122 15:34:09.548027 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:09 crc kubenswrapper[4825]: I0122 15:34:09.587613 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:09 crc kubenswrapper[4825]: I0122 15:34:09.594023 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:09 crc kubenswrapper[4825]: I0122 15:34:09.709444 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" podStartSLOduration=7.709427269 podStartE2EDuration="7.709427269s" podCreationTimestamp="2026-01-22 15:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:34:09.70359993 +0000 UTC m=+596.465126850" watchObservedRunningTime="2026-01-22 15:34:09.709427269 +0000 UTC m=+596.470954169" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.059139 4825 scope.go:117] "RemoveContainer" containerID="529ac67b53bb0c9f6981c0bc5146a33bd9abdd0ad8529319926529fc051d9a2f" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.157336 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r"] Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.157601 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddaef815-cdc9-496c-84b6-854d4d626f48" containerName="util" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.157618 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddaef815-cdc9-496c-84b6-854d4d626f48" containerName="util" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.157639 4825 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddaef815-cdc9-496c-84b6-854d4d626f48" containerName="pull" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.157646 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddaef815-cdc9-496c-84b6-854d4d626f48" containerName="pull" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.157657 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddaef815-cdc9-496c-84b6-854d4d626f48" containerName="extract" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.157663 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddaef815-cdc9-496c-84b6-854d4d626f48" containerName="extract" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.157798 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddaef815-cdc9-496c-84b6-854d4d626f48" containerName="extract" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.158461 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.160382 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.160858 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-m6n2s" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.161197 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.162799 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d57mz\" (UniqueName: \"kubernetes.io/projected/1d4ea96c-02ed-4924-bdc0-0fa0a9932467-kube-api-access-d57mz\") pod \"obo-prometheus-operator-68bc856cb9-x6k4r\" (UID: 
\"1d4ea96c-02ed-4924-bdc0-0fa0a9932467\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.178272 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r"] Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.264487 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d57mz\" (UniqueName: \"kubernetes.io/projected/1d4ea96c-02ed-4924-bdc0-0fa0a9932467-kube-api-access-d57mz\") pod \"obo-prometheus-operator-68bc856cb9-x6k4r\" (UID: \"1d4ea96c-02ed-4924-bdc0-0fa0a9932467\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.310206 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d57mz\" (UniqueName: \"kubernetes.io/projected/1d4ea96c-02ed-4924-bdc0-0fa0a9932467-kube-api-access-d57mz\") pod \"obo-prometheus-operator-68bc856cb9-x6k4r\" (UID: \"1d4ea96c-02ed-4924-bdc0-0fa0a9932467\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.320104 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs"] Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.321081 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.325267 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-x99pl" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.326099 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.341870 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs"] Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.349140 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4"] Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.350152 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.372267 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4"] Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.467921 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/290bb62e-9a41-4a2a-886c-803ffa414dce-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4\" (UID: \"290bb62e-9a41-4a2a-886c-803ffa414dce\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.468004 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0eec7a70-3ecb-430c-b94d-94ad04cf5ee1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff58c757-6xffs\" (UID: \"0eec7a70-3ecb-430c-b94d-94ad04cf5ee1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.468066 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/290bb62e-9a41-4a2a-886c-803ffa414dce-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4\" (UID: \"290bb62e-9a41-4a2a-886c-803ffa414dce\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.468143 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0eec7a70-3ecb-430c-b94d-94ad04cf5ee1-webhook-cert\") 
pod \"obo-prometheus-operator-admission-webhook-7ff58c757-6xffs\" (UID: \"0eec7a70-3ecb-430c-b94d-94ad04cf5ee1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.477845 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.511716 4825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-x6k4r_openshift-operators_1d4ea96c-02ed-4924-bdc0-0fa0a9932467_0(121fb3dc546473a59381cd843f4b1193cfad21b2d7c677a03c9e9bc52e1d7e4b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.511799 4825 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-x6k4r_openshift-operators_1d4ea96c-02ed-4924-bdc0-0fa0a9932467_0(121fb3dc546473a59381cd843f4b1193cfad21b2d7c677a03c9e9bc52e1d7e4b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.511829 4825 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-x6k4r_openshift-operators_1d4ea96c-02ed-4924-bdc0-0fa0a9932467_0(121fb3dc546473a59381cd843f4b1193cfad21b2d7c677a03c9e9bc52e1d7e4b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.511879 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-x6k4r_openshift-operators(1d4ea96c-02ed-4924-bdc0-0fa0a9932467)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-x6k4r_openshift-operators(1d4ea96c-02ed-4924-bdc0-0fa0a9932467)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-x6k4r_openshift-operators_1d4ea96c-02ed-4924-bdc0-0fa0a9932467_0(121fb3dc546473a59381cd843f4b1193cfad21b2d7c677a03c9e9bc52e1d7e4b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r" podUID="1d4ea96c-02ed-4924-bdc0-0fa0a9932467" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.519304 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-k58sz"] Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.520253 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-k58sz" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.521959 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-xkbmg" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.522320 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.534498 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-k58sz"] Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.569417 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0eec7a70-3ecb-430c-b94d-94ad04cf5ee1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff58c757-6xffs\" (UID: \"0eec7a70-3ecb-430c-b94d-94ad04cf5ee1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.569494 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/290bb62e-9a41-4a2a-886c-803ffa414dce-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4\" (UID: \"290bb62e-9a41-4a2a-886c-803ffa414dce\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.569519 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0eec7a70-3ecb-430c-b94d-94ad04cf5ee1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff58c757-6xffs\" (UID: \"0eec7a70-3ecb-430c-b94d-94ad04cf5ee1\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.569551 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7-observability-operator-tls\") pod \"observability-operator-59bdc8b94-k58sz\" (UID: \"cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7\") " pod="openshift-operators/observability-operator-59bdc8b94-k58sz" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.569586 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8msf5\" (UniqueName: \"kubernetes.io/projected/cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7-kube-api-access-8msf5\") pod \"observability-operator-59bdc8b94-k58sz\" (UID: \"cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7\") " pod="openshift-operators/observability-operator-59bdc8b94-k58sz" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.569609 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/290bb62e-9a41-4a2a-886c-803ffa414dce-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4\" (UID: \"290bb62e-9a41-4a2a-886c-803ffa414dce\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.573024 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/290bb62e-9a41-4a2a-886c-803ffa414dce-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4\" (UID: \"290bb62e-9a41-4a2a-886c-803ffa414dce\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.573026 4825 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0eec7a70-3ecb-430c-b94d-94ad04cf5ee1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff58c757-6xffs\" (UID: \"0eec7a70-3ecb-430c-b94d-94ad04cf5ee1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.573137 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/290bb62e-9a41-4a2a-886c-803ffa414dce-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4\" (UID: \"290bb62e-9a41-4a2a-886c-803ffa414dce\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.591057 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0eec7a70-3ecb-430c-b94d-94ad04cf5ee1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7ff58c757-6xffs\" (UID: \"0eec7a70-3ecb-430c-b94d-94ad04cf5ee1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.598321 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ljkjt_049abb37-810d-475f-b042-bceb43e81dd5/kube-multus/1.log" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.598414 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.598808 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.619517 4825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-x6k4r_openshift-operators_1d4ea96c-02ed-4924-bdc0-0fa0a9932467_0(2456b6d038d45e2d7def261cfa34dbcebace067a024d98698bc3a22b9f2f1531): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.619609 4825 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-x6k4r_openshift-operators_1d4ea96c-02ed-4924-bdc0-0fa0a9932467_0(2456b6d038d45e2d7def261cfa34dbcebace067a024d98698bc3a22b9f2f1531): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.619643 4825 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-x6k4r_openshift-operators_1d4ea96c-02ed-4924-bdc0-0fa0a9932467_0(2456b6d038d45e2d7def261cfa34dbcebace067a024d98698bc3a22b9f2f1531): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.619697 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-x6k4r_openshift-operators(1d4ea96c-02ed-4924-bdc0-0fa0a9932467)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-x6k4r_openshift-operators(1d4ea96c-02ed-4924-bdc0-0fa0a9932467)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-x6k4r_openshift-operators_1d4ea96c-02ed-4924-bdc0-0fa0a9932467_0(2456b6d038d45e2d7def261cfa34dbcebace067a024d98698bc3a22b9f2f1531): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r" podUID="1d4ea96c-02ed-4924-bdc0-0fa0a9932467" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.644790 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.707602 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.709754 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7-observability-operator-tls\") pod \"observability-operator-59bdc8b94-k58sz\" (UID: \"cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7\") " pod="openshift-operators/observability-operator-59bdc8b94-k58sz" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.709805 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8msf5\" (UniqueName: \"kubernetes.io/projected/cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7-kube-api-access-8msf5\") pod \"observability-operator-59bdc8b94-k58sz\" (UID: \"cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7\") " pod="openshift-operators/observability-operator-59bdc8b94-k58sz" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.716870 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7-observability-operator-tls\") pod \"observability-operator-59bdc8b94-k58sz\" (UID: \"cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7\") " pod="openshift-operators/observability-operator-59bdc8b94-k58sz" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.733175 4825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff58c757-6xffs_openshift-operators_0eec7a70-3ecb-430c-b94d-94ad04cf5ee1_0(754446fafd186166601eef3e7ebde8470fba1f978fdf54c3ddaf76f57464380e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.733258 4825 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff58c757-6xffs_openshift-operators_0eec7a70-3ecb-430c-b94d-94ad04cf5ee1_0(754446fafd186166601eef3e7ebde8470fba1f978fdf54c3ddaf76f57464380e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.733282 4825 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff58c757-6xffs_openshift-operators_0eec7a70-3ecb-430c-b94d-94ad04cf5ee1_0(754446fafd186166601eef3e7ebde8470fba1f978fdf54c3ddaf76f57464380e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.733343 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7ff58c757-6xffs_openshift-operators(0eec7a70-3ecb-430c-b94d-94ad04cf5ee1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7ff58c757-6xffs_openshift-operators(0eec7a70-3ecb-430c-b94d-94ad04cf5ee1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff58c757-6xffs_openshift-operators_0eec7a70-3ecb-430c-b94d-94ad04cf5ee1_0(754446fafd186166601eef3e7ebde8470fba1f978fdf54c3ddaf76f57464380e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" podUID="0eec7a70-3ecb-430c-b94d-94ad04cf5ee1" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.740447 4825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4_openshift-operators_290bb62e-9a41-4a2a-886c-803ffa414dce_0(ae52004a6898a922daa43ce20fdbad8eef9e37574b379f6c74a397f95ba916cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.740517 4825 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4_openshift-operators_290bb62e-9a41-4a2a-886c-803ffa414dce_0(ae52004a6898a922daa43ce20fdbad8eef9e37574b379f6c74a397f95ba916cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.740537 4825 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4_openshift-operators_290bb62e-9a41-4a2a-886c-803ffa414dce_0(ae52004a6898a922daa43ce20fdbad8eef9e37574b379f6c74a397f95ba916cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.740582 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4_openshift-operators(290bb62e-9a41-4a2a-886c-803ffa414dce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4_openshift-operators(290bb62e-9a41-4a2a-886c-803ffa414dce)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4_openshift-operators_290bb62e-9a41-4a2a-886c-803ffa414dce_0(ae52004a6898a922daa43ce20fdbad8eef9e37574b379f6c74a397f95ba916cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" podUID="290bb62e-9a41-4a2a-886c-803ffa414dce" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.747899 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8msf5\" (UniqueName: \"kubernetes.io/projected/cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7-kube-api-access-8msf5\") pod \"observability-operator-59bdc8b94-k58sz\" (UID: \"cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7\") " pod="openshift-operators/observability-operator-59bdc8b94-k58sz" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.779537 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8wflx"] Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.780642 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8wflx" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.794024 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-4678n" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.811022 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2df2d5aa-6948-42a3-8ba0-7eedffb87020-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8wflx\" (UID: \"2df2d5aa-6948-42a3-8ba0-7eedffb87020\") " pod="openshift-operators/perses-operator-5bf474d74f-8wflx" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.811071 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g2wz\" (UniqueName: \"kubernetes.io/projected/2df2d5aa-6948-42a3-8ba0-7eedffb87020-kube-api-access-9g2wz\") pod \"perses-operator-5bf474d74f-8wflx\" (UID: \"2df2d5aa-6948-42a3-8ba0-7eedffb87020\") " pod="openshift-operators/perses-operator-5bf474d74f-8wflx" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.811596 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8wflx"] Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.874920 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-k58sz" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.903468 4825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-k58sz_openshift-operators_cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7_0(5bb2031fd92e062626dd1b913b7617a5f4a2a3f4e2ac5fddb4c67a871295ce1e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.903544 4825 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-k58sz_openshift-operators_cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7_0(5bb2031fd92e062626dd1b913b7617a5f4a2a3f4e2ac5fddb4c67a871295ce1e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-k58sz" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.903584 4825 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-k58sz_openshift-operators_cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7_0(5bb2031fd92e062626dd1b913b7617a5f4a2a3f4e2ac5fddb4c67a871295ce1e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-k58sz" Jan 22 15:34:14 crc kubenswrapper[4825]: E0122 15:34:14.903630 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-k58sz_openshift-operators(cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-k58sz_openshift-operators(cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-k58sz_openshift-operators_cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7_0(5bb2031fd92e062626dd1b913b7617a5f4a2a3f4e2ac5fddb4c67a871295ce1e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-k58sz" podUID="cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.912579 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2df2d5aa-6948-42a3-8ba0-7eedffb87020-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8wflx\" (UID: \"2df2d5aa-6948-42a3-8ba0-7eedffb87020\") " pod="openshift-operators/perses-operator-5bf474d74f-8wflx" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.912645 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g2wz\" (UniqueName: \"kubernetes.io/projected/2df2d5aa-6948-42a3-8ba0-7eedffb87020-kube-api-access-9g2wz\") pod \"perses-operator-5bf474d74f-8wflx\" (UID: \"2df2d5aa-6948-42a3-8ba0-7eedffb87020\") " pod="openshift-operators/perses-operator-5bf474d74f-8wflx" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.914124 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2df2d5aa-6948-42a3-8ba0-7eedffb87020-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8wflx\" (UID: \"2df2d5aa-6948-42a3-8ba0-7eedffb87020\") " pod="openshift-operators/perses-operator-5bf474d74f-8wflx" Jan 22 15:34:14 crc kubenswrapper[4825]: I0122 15:34:14.939071 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g2wz\" (UniqueName: \"kubernetes.io/projected/2df2d5aa-6948-42a3-8ba0-7eedffb87020-kube-api-access-9g2wz\") pod \"perses-operator-5bf474d74f-8wflx\" (UID: \"2df2d5aa-6948-42a3-8ba0-7eedffb87020\") " pod="openshift-operators/perses-operator-5bf474d74f-8wflx" Jan 22 15:34:15 crc kubenswrapper[4825]: I0122 15:34:15.110945 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8wflx" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.140140 4825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8wflx_openshift-operators_2df2d5aa-6948-42a3-8ba0-7eedffb87020_0(2d4132b75ec5b9628efd6134d8d6e6453d59ac2713ef4234c4df6cde469b31e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.140217 4825 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8wflx_openshift-operators_2df2d5aa-6948-42a3-8ba0-7eedffb87020_0(2d4132b75ec5b9628efd6134d8d6e6453d59ac2713ef4234c4df6cde469b31e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-8wflx" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.140243 4825 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8wflx_openshift-operators_2df2d5aa-6948-42a3-8ba0-7eedffb87020_0(2d4132b75ec5b9628efd6134d8d6e6453d59ac2713ef4234c4df6cde469b31e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-8wflx" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.140331 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-8wflx_openshift-operators(2df2d5aa-6948-42a3-8ba0-7eedffb87020)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-8wflx_openshift-operators(2df2d5aa-6948-42a3-8ba0-7eedffb87020)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8wflx_openshift-operators_2df2d5aa-6948-42a3-8ba0-7eedffb87020_0(2d4132b75ec5b9628efd6134d8d6e6453d59ac2713ef4234c4df6cde469b31e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-8wflx" podUID="2df2d5aa-6948-42a3-8ba0-7eedffb87020" Jan 22 15:34:15 crc kubenswrapper[4825]: I0122 15:34:15.517525 4825 scope.go:117] "RemoveContainer" containerID="f67902ec5693f8ee504f3f703021123e51609876caa0e33faeb018883a8aca56" Jan 22 15:34:15 crc kubenswrapper[4825]: I0122 15:34:15.605149 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8wflx" Jan 22 15:34:15 crc kubenswrapper[4825]: I0122 15:34:15.605405 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" Jan 22 15:34:15 crc kubenswrapper[4825]: I0122 15:34:15.605468 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-k58sz" Jan 22 15:34:15 crc kubenswrapper[4825]: I0122 15:34:15.605413 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" Jan 22 15:34:15 crc kubenswrapper[4825]: I0122 15:34:15.606195 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" Jan 22 15:34:15 crc kubenswrapper[4825]: I0122 15:34:15.606270 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" Jan 22 15:34:15 crc kubenswrapper[4825]: I0122 15:34:15.607109 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-k58sz" Jan 22 15:34:15 crc kubenswrapper[4825]: I0122 15:34:15.607817 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8wflx" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.658550 4825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4_openshift-operators_290bb62e-9a41-4a2a-886c-803ffa414dce_0(916c7995c6175ef6d8160b34fb93568fcded91ca673e0d0370d476bc5320c93c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.658669 4825 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4_openshift-operators_290bb62e-9a41-4a2a-886c-803ffa414dce_0(916c7995c6175ef6d8160b34fb93568fcded91ca673e0d0370d476bc5320c93c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.659010 4825 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4_openshift-operators_290bb62e-9a41-4a2a-886c-803ffa414dce_0(916c7995c6175ef6d8160b34fb93568fcded91ca673e0d0370d476bc5320c93c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.659075 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4_openshift-operators(290bb62e-9a41-4a2a-886c-803ffa414dce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4_openshift-operators(290bb62e-9a41-4a2a-886c-803ffa414dce)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4_openshift-operators_290bb62e-9a41-4a2a-886c-803ffa414dce_0(916c7995c6175ef6d8160b34fb93568fcded91ca673e0d0370d476bc5320c93c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" podUID="290bb62e-9a41-4a2a-886c-803ffa414dce" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.673088 4825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-k58sz_openshift-operators_cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7_0(620008bdc4cabcd98c7c92f8ee95f61b51101035593e7daae73227b5ae932401): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.673155 4825 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-k58sz_openshift-operators_cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7_0(620008bdc4cabcd98c7c92f8ee95f61b51101035593e7daae73227b5ae932401): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-k58sz" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.673177 4825 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-k58sz_openshift-operators_cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7_0(620008bdc4cabcd98c7c92f8ee95f61b51101035593e7daae73227b5ae932401): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-k58sz" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.673227 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-k58sz_openshift-operators(cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-k58sz_openshift-operators(cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-k58sz_openshift-operators_cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7_0(620008bdc4cabcd98c7c92f8ee95f61b51101035593e7daae73227b5ae932401): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-k58sz" podUID="cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.682034 4825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff58c757-6xffs_openshift-operators_0eec7a70-3ecb-430c-b94d-94ad04cf5ee1_0(3c13ef6e26c84e831a7ca519f123295c112bf58e76388bc3ae8425d971edf230): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.682093 4825 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff58c757-6xffs_openshift-operators_0eec7a70-3ecb-430c-b94d-94ad04cf5ee1_0(3c13ef6e26c84e831a7ca519f123295c112bf58e76388bc3ae8425d971edf230): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.682114 4825 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff58c757-6xffs_openshift-operators_0eec7a70-3ecb-430c-b94d-94ad04cf5ee1_0(3c13ef6e26c84e831a7ca519f123295c112bf58e76388bc3ae8425d971edf230): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.682164 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7ff58c757-6xffs_openshift-operators(0eec7a70-3ecb-430c-b94d-94ad04cf5ee1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7ff58c757-6xffs_openshift-operators(0eec7a70-3ecb-430c-b94d-94ad04cf5ee1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ff58c757-6xffs_openshift-operators_0eec7a70-3ecb-430c-b94d-94ad04cf5ee1_0(3c13ef6e26c84e831a7ca519f123295c112bf58e76388bc3ae8425d971edf230): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" podUID="0eec7a70-3ecb-430c-b94d-94ad04cf5ee1" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.687318 4825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8wflx_openshift-operators_2df2d5aa-6948-42a3-8ba0-7eedffb87020_0(2e82d8deeeac0fb7620bec6d69d0da3b21aa098c366ea189ce409e6f44ef9461): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.687366 4825 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8wflx_openshift-operators_2df2d5aa-6948-42a3-8ba0-7eedffb87020_0(2e82d8deeeac0fb7620bec6d69d0da3b21aa098c366ea189ce409e6f44ef9461): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-8wflx" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.687383 4825 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8wflx_openshift-operators_2df2d5aa-6948-42a3-8ba0-7eedffb87020_0(2e82d8deeeac0fb7620bec6d69d0da3b21aa098c366ea189ce409e6f44ef9461): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-8wflx" Jan 22 15:34:15 crc kubenswrapper[4825]: E0122 15:34:15.687424 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-8wflx_openshift-operators(2df2d5aa-6948-42a3-8ba0-7eedffb87020)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-8wflx_openshift-operators(2df2d5aa-6948-42a3-8ba0-7eedffb87020)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-8wflx_openshift-operators_2df2d5aa-6948-42a3-8ba0-7eedffb87020_0(2e82d8deeeac0fb7620bec6d69d0da3b21aa098c366ea189ce409e6f44ef9461): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-8wflx" podUID="2df2d5aa-6948-42a3-8ba0-7eedffb87020" Jan 22 15:34:16 crc kubenswrapper[4825]: I0122 15:34:16.624833 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ljkjt_049abb37-810d-475f-b042-bceb43e81dd5/kube-multus/1.log" Jan 22 15:34:16 crc kubenswrapper[4825]: I0122 15:34:16.624889 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ljkjt" event={"ID":"049abb37-810d-475f-b042-bceb43e81dd5","Type":"ContainerStarted","Data":"f295ec2d97c66455298faf115f61a86e4dcf3e178fae3e4d179c7a828829fb51"} Jan 22 15:34:25 crc kubenswrapper[4825]: I0122 15:34:25.516846 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r" Jan 22 15:34:25 crc kubenswrapper[4825]: I0122 15:34:25.517843 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r" Jan 22 15:34:25 crc kubenswrapper[4825]: I0122 15:34:25.942432 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r"] Jan 22 15:34:25 crc kubenswrapper[4825]: W0122 15:34:25.957448 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d4ea96c_02ed_4924_bdc0_0fa0a9932467.slice/crio-34ae062371cc7c0afc6b7dbc86d0855a598a0c2aab51c6d066f6929f53b5de5e WatchSource:0}: Error finding container 34ae062371cc7c0afc6b7dbc86d0855a598a0c2aab51c6d066f6929f53b5de5e: Status 404 returned error can't find the container with id 34ae062371cc7c0afc6b7dbc86d0855a598a0c2aab51c6d066f6929f53b5de5e Jan 22 15:34:26 crc kubenswrapper[4825]: I0122 15:34:26.516257 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" Jan 22 15:34:26 crc kubenswrapper[4825]: I0122 15:34:26.516312 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8wflx" Jan 22 15:34:26 crc kubenswrapper[4825]: I0122 15:34:26.516961 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" Jan 22 15:34:26 crc kubenswrapper[4825]: I0122 15:34:26.517101 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8wflx" Jan 22 15:34:26 crc kubenswrapper[4825]: I0122 15:34:26.762036 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r" event={"ID":"1d4ea96c-02ed-4924-bdc0-0fa0a9932467","Type":"ContainerStarted","Data":"34ae062371cc7c0afc6b7dbc86d0855a598a0c2aab51c6d066f6929f53b5de5e"} Jan 22 15:34:26 crc kubenswrapper[4825]: I0122 15:34:26.863899 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8wflx"] Jan 22 15:34:26 crc kubenswrapper[4825]: W0122 15:34:26.864762 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2df2d5aa_6948_42a3_8ba0_7eedffb87020.slice/crio-c84754df2b6d5eea22e392f370bd778b1e02e28a69f3c2956b3a24f837979fba WatchSource:0}: Error finding container c84754df2b6d5eea22e392f370bd778b1e02e28a69f3c2956b3a24f837979fba: Status 404 returned error can't find the container with id c84754df2b6d5eea22e392f370bd778b1e02e28a69f3c2956b3a24f837979fba Jan 22 15:34:26 crc kubenswrapper[4825]: I0122 15:34:26.903911 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4"] Jan 22 15:34:27 crc kubenswrapper[4825]: I0122 15:34:27.516786 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-k58sz" Jan 22 15:34:27 crc kubenswrapper[4825]: I0122 15:34:27.516801 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" Jan 22 15:34:27 crc kubenswrapper[4825]: I0122 15:34:27.517374 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-k58sz" Jan 22 15:34:27 crc kubenswrapper[4825]: I0122 15:34:27.517768 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" Jan 22 15:34:27 crc kubenswrapper[4825]: I0122 15:34:27.787676 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-8wflx" event={"ID":"2df2d5aa-6948-42a3-8ba0-7eedffb87020","Type":"ContainerStarted","Data":"c84754df2b6d5eea22e392f370bd778b1e02e28a69f3c2956b3a24f837979fba"} Jan 22 15:34:27 crc kubenswrapper[4825]: I0122 15:34:27.794116 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs"] Jan 22 15:34:27 crc kubenswrapper[4825]: I0122 15:34:27.795056 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" event={"ID":"290bb62e-9a41-4a2a-886c-803ffa414dce","Type":"ContainerStarted","Data":"3dcc55a8a609cf8388daf80859971d20e2867e3d1ebdb4341f56e6d31a79cbe1"} Jan 22 15:34:27 crc kubenswrapper[4825]: I0122 15:34:27.833468 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-k58sz"] Jan 22 15:34:28 crc kubenswrapper[4825]: I0122 15:34:28.830138 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-k58sz" event={"ID":"cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7","Type":"ContainerStarted","Data":"a25fbf4ee2d6817e587edde43a791aca6ce21b67c1aadc4df722293be3661298"} Jan 22 15:34:28 crc kubenswrapper[4825]: I0122 15:34:28.831697 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" 
event={"ID":"0eec7a70-3ecb-430c-b94d-94ad04cf5ee1","Type":"ContainerStarted","Data":"2beaae387c5cf267ef4bcf3d5fd411d37d4fea838226b1eff7707d0d01a51e92"} Jan 22 15:34:32 crc kubenswrapper[4825]: I0122 15:34:32.847360 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lnb8w" Jan 22 15:34:44 crc kubenswrapper[4825]: I0122 15:34:44.080215 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r" event={"ID":"1d4ea96c-02ed-4924-bdc0-0fa0a9932467","Type":"ContainerStarted","Data":"94c454d0d87644fbf6136d64ab51e7027c13c18f78d9ea8cdf23a6152a500270"} Jan 22 15:34:44 crc kubenswrapper[4825]: I0122 15:34:44.081906 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-k58sz" event={"ID":"cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7","Type":"ContainerStarted","Data":"e39193d1fc607e4ec68be1f9a415cb8ab970873ac286b4a38f37d20dd94fa486"} Jan 22 15:34:44 crc kubenswrapper[4825]: I0122 15:34:44.082615 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-k58sz" Jan 22 15:34:44 crc kubenswrapper[4825]: I0122 15:34:44.084071 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" event={"ID":"290bb62e-9a41-4a2a-886c-803ffa414dce","Type":"ContainerStarted","Data":"d083a48fe7f159e2549167f3e3a6c8b6a378fb75c1a29a63f9a94b30f9baab13"} Jan 22 15:34:44 crc kubenswrapper[4825]: I0122 15:34:44.084934 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-k58sz" Jan 22 15:34:44 crc kubenswrapper[4825]: I0122 15:34:44.086264 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-8wflx" 
event={"ID":"2df2d5aa-6948-42a3-8ba0-7eedffb87020","Type":"ContainerStarted","Data":"6164f2a1500a1c7f88bd7b607beb78b7c719c69f892da7a3a47c79cc3df12749"} Jan 22 15:34:44 crc kubenswrapper[4825]: I0122 15:34:44.086413 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-8wflx" Jan 22 15:34:44 crc kubenswrapper[4825]: I0122 15:34:44.088082 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" event={"ID":"0eec7a70-3ecb-430c-b94d-94ad04cf5ee1","Type":"ContainerStarted","Data":"dce92193fa9ce8bd36de297240b705ca723fbdf121f7be6855ccdb66a39a2a19"} Jan 22 15:34:44 crc kubenswrapper[4825]: I0122 15:34:44.109093 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6k4r" podStartSLOduration=13.15673382 podStartE2EDuration="30.109072573s" podCreationTimestamp="2026-01-22 15:34:14 +0000 UTC" firstStartedPulling="2026-01-22 15:34:25.960450165 +0000 UTC m=+612.721977075" lastFinishedPulling="2026-01-22 15:34:42.912788918 +0000 UTC m=+629.674315828" observedRunningTime="2026-01-22 15:34:44.106150359 +0000 UTC m=+630.867677289" watchObservedRunningTime="2026-01-22 15:34:44.109072573 +0000 UTC m=+630.870599493" Jan 22 15:34:44 crc kubenswrapper[4825]: I0122 15:34:44.144314 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-8wflx" podStartSLOduration=14.07831408 podStartE2EDuration="30.14426894s" podCreationTimestamp="2026-01-22 15:34:14 +0000 UTC" firstStartedPulling="2026-01-22 15:34:26.867076113 +0000 UTC m=+613.628603023" lastFinishedPulling="2026-01-22 15:34:42.933030973 +0000 UTC m=+629.694557883" observedRunningTime="2026-01-22 15:34:44.139895084 +0000 UTC m=+630.901422004" watchObservedRunningTime="2026-01-22 15:34:44.14426894 +0000 UTC m=+630.905795860" Jan 22 15:34:44 crc 
kubenswrapper[4825]: I0122 15:34:44.190613 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-6xffs" podStartSLOduration=15.056710091 podStartE2EDuration="30.190592878s" podCreationTimestamp="2026-01-22 15:34:14 +0000 UTC" firstStartedPulling="2026-01-22 15:34:27.80379028 +0000 UTC m=+614.565317190" lastFinishedPulling="2026-01-22 15:34:42.937673077 +0000 UTC m=+629.699199977" observedRunningTime="2026-01-22 15:34:44.189943049 +0000 UTC m=+630.951469959" watchObservedRunningTime="2026-01-22 15:34:44.190592878 +0000 UTC m=+630.952119788" Jan 22 15:34:44 crc kubenswrapper[4825]: I0122 15:34:44.193323 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-k58sz" podStartSLOduration=15.084334659 podStartE2EDuration="30.193315867s" podCreationTimestamp="2026-01-22 15:34:14 +0000 UTC" firstStartedPulling="2026-01-22 15:34:27.845038791 +0000 UTC m=+614.606565711" lastFinishedPulling="2026-01-22 15:34:42.954020009 +0000 UTC m=+629.715546919" observedRunningTime="2026-01-22 15:34:44.163276459 +0000 UTC m=+630.924803369" watchObservedRunningTime="2026-01-22 15:34:44.193315867 +0000 UTC m=+630.954842777" Jan 22 15:34:44 crc kubenswrapper[4825]: I0122 15:34:44.226030 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4" podStartSLOduration=14.228530308 podStartE2EDuration="30.226011691s" podCreationTimestamp="2026-01-22 15:34:14 +0000 UTC" firstStartedPulling="2026-01-22 15:34:26.914286726 +0000 UTC m=+613.675813636" lastFinishedPulling="2026-01-22 15:34:42.911768109 +0000 UTC m=+629.673295019" observedRunningTime="2026-01-22 15:34:44.22147042 +0000 UTC m=+630.982997330" watchObservedRunningTime="2026-01-22 15:34:44.226011691 +0000 UTC m=+630.987538591" Jan 22 15:34:55 crc kubenswrapper[4825]: I0122 
15:34:55.114484 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-8wflx" Jan 22 15:34:55 crc kubenswrapper[4825]: I0122 15:34:55.750192 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mvpnf"] Jan 22 15:34:55 crc kubenswrapper[4825]: I0122 15:34:55.751198 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvpnf" Jan 22 15:34:55 crc kubenswrapper[4825]: I0122 15:34:55.755248 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 22 15:34:55 crc kubenswrapper[4825]: I0122 15:34:55.755862 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 22 15:34:55 crc kubenswrapper[4825]: I0122 15:34:55.756091 4825 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-6h72x" Jan 22 15:34:55 crc kubenswrapper[4825]: I0122 15:34:55.765877 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-8rcmg"] Jan 22 15:34:55 crc kubenswrapper[4825]: I0122 15:34:55.771932 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mvpnf"] Jan 22 15:34:55 crc kubenswrapper[4825]: I0122 15:34:55.772069 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-8rcmg" Jan 22 15:34:55 crc kubenswrapper[4825]: I0122 15:34:55.781421 4825 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-bs6lg" Jan 22 15:34:55 crc kubenswrapper[4825]: I0122 15:34:55.784477 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-8rcmg"] Jan 22 15:34:55 crc kubenswrapper[4825]: I0122 15:34:55.793332 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-hf8bj"] Jan 22 15:34:55 crc kubenswrapper[4825]: I0122 15:34:55.794215 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-hf8bj" Jan 22 15:34:55 crc kubenswrapper[4825]: I0122 15:34:55.801138 4825 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-kxfvq" Jan 22 15:34:55 crc kubenswrapper[4825]: I0122 15:34:55.814297 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-hf8bj"] Jan 22 15:34:55 crc kubenswrapper[4825]: I0122 15:34:55.932887 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf75h\" (UniqueName: \"kubernetes.io/projected/25d0f3c8-a90d-468d-97bf-61ce52c80b40-kube-api-access-vf75h\") pod \"cert-manager-858654f9db-8rcmg\" (UID: \"25d0f3c8-a90d-468d-97bf-61ce52c80b40\") " pod="cert-manager/cert-manager-858654f9db-8rcmg" Jan 22 15:34:55 crc kubenswrapper[4825]: I0122 15:34:55.932954 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm2n5\" (UniqueName: \"kubernetes.io/projected/a55ed53b-731c-41a4-8f41-8baa28baf731-kube-api-access-mm2n5\") pod \"cert-manager-cainjector-cf98fcc89-mvpnf\" (UID: \"a55ed53b-731c-41a4-8f41-8baa28baf731\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvpnf" Jan 22 15:34:55 crc kubenswrapper[4825]: I0122 15:34:55.933059 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsdxd\" (UniqueName: \"kubernetes.io/projected/f957c7cc-d8d5-435b-976d-3fe554887cc0-kube-api-access-fsdxd\") pod \"cert-manager-webhook-687f57d79b-hf8bj\" (UID: \"f957c7cc-d8d5-435b-976d-3fe554887cc0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-hf8bj" Jan 22 15:34:56 crc kubenswrapper[4825]: I0122 15:34:56.034141 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf75h\" (UniqueName: \"kubernetes.io/projected/25d0f3c8-a90d-468d-97bf-61ce52c80b40-kube-api-access-vf75h\") pod \"cert-manager-858654f9db-8rcmg\" (UID: \"25d0f3c8-a90d-468d-97bf-61ce52c80b40\") " pod="cert-manager/cert-manager-858654f9db-8rcmg" Jan 22 15:34:56 crc kubenswrapper[4825]: I0122 15:34:56.034207 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm2n5\" (UniqueName: \"kubernetes.io/projected/a55ed53b-731c-41a4-8f41-8baa28baf731-kube-api-access-mm2n5\") pod \"cert-manager-cainjector-cf98fcc89-mvpnf\" (UID: \"a55ed53b-731c-41a4-8f41-8baa28baf731\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvpnf" Jan 22 15:34:56 crc kubenswrapper[4825]: I0122 15:34:56.034242 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsdxd\" (UniqueName: \"kubernetes.io/projected/f957c7cc-d8d5-435b-976d-3fe554887cc0-kube-api-access-fsdxd\") pod \"cert-manager-webhook-687f57d79b-hf8bj\" (UID: \"f957c7cc-d8d5-435b-976d-3fe554887cc0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-hf8bj" Jan 22 15:34:56 crc kubenswrapper[4825]: I0122 15:34:56.054860 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm2n5\" (UniqueName: 
\"kubernetes.io/projected/a55ed53b-731c-41a4-8f41-8baa28baf731-kube-api-access-mm2n5\") pod \"cert-manager-cainjector-cf98fcc89-mvpnf\" (UID: \"a55ed53b-731c-41a4-8f41-8baa28baf731\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvpnf" Jan 22 15:34:56 crc kubenswrapper[4825]: I0122 15:34:56.054865 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf75h\" (UniqueName: \"kubernetes.io/projected/25d0f3c8-a90d-468d-97bf-61ce52c80b40-kube-api-access-vf75h\") pod \"cert-manager-858654f9db-8rcmg\" (UID: \"25d0f3c8-a90d-468d-97bf-61ce52c80b40\") " pod="cert-manager/cert-manager-858654f9db-8rcmg" Jan 22 15:34:56 crc kubenswrapper[4825]: I0122 15:34:56.055146 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsdxd\" (UniqueName: \"kubernetes.io/projected/f957c7cc-d8d5-435b-976d-3fe554887cc0-kube-api-access-fsdxd\") pod \"cert-manager-webhook-687f57d79b-hf8bj\" (UID: \"f957c7cc-d8d5-435b-976d-3fe554887cc0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-hf8bj" Jan 22 15:34:56 crc kubenswrapper[4825]: I0122 15:34:56.084764 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvpnf" Jan 22 15:34:56 crc kubenswrapper[4825]: I0122 15:34:56.117344 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-8rcmg" Jan 22 15:34:56 crc kubenswrapper[4825]: I0122 15:34:56.128745 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-hf8bj" Jan 22 15:34:56 crc kubenswrapper[4825]: I0122 15:34:56.726608 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-8rcmg"] Jan 22 15:34:56 crc kubenswrapper[4825]: W0122 15:34:56.729376 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25d0f3c8_a90d_468d_97bf_61ce52c80b40.slice/crio-f699a105fe231935a57f319bde3da65dd4cea77f3ac753de55671724e4a72cf2 WatchSource:0}: Error finding container f699a105fe231935a57f319bde3da65dd4cea77f3ac753de55671724e4a72cf2: Status 404 returned error can't find the container with id f699a105fe231935a57f319bde3da65dd4cea77f3ac753de55671724e4a72cf2 Jan 22 15:34:57 crc kubenswrapper[4825]: I0122 15:34:57.014435 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mvpnf"] Jan 22 15:34:57 crc kubenswrapper[4825]: I0122 15:34:57.025457 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-hf8bj"] Jan 22 15:34:57 crc kubenswrapper[4825]: W0122 15:34:57.027501 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf957c7cc_d8d5_435b_976d_3fe554887cc0.slice/crio-055764a7b6435e58eed581abfa1a7b600745ccc7fe28f85bac2c287c5c7a45b7 WatchSource:0}: Error finding container 055764a7b6435e58eed581abfa1a7b600745ccc7fe28f85bac2c287c5c7a45b7: Status 404 returned error can't find the container with id 055764a7b6435e58eed581abfa1a7b600745ccc7fe28f85bac2c287c5c7a45b7 Jan 22 15:34:57 crc kubenswrapper[4825]: I0122 15:34:57.432834 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvpnf" 
event={"ID":"a55ed53b-731c-41a4-8f41-8baa28baf731","Type":"ContainerStarted","Data":"d42d72d099a4ac082c3acd26fd2dc2c0cf2c348ddf9bfd786a1ac1515428b378"} Jan 22 15:34:57 crc kubenswrapper[4825]: I0122 15:34:57.435541 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-hf8bj" event={"ID":"f957c7cc-d8d5-435b-976d-3fe554887cc0","Type":"ContainerStarted","Data":"055764a7b6435e58eed581abfa1a7b600745ccc7fe28f85bac2c287c5c7a45b7"} Jan 22 15:34:57 crc kubenswrapper[4825]: I0122 15:34:57.441266 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-8rcmg" event={"ID":"25d0f3c8-a90d-468d-97bf-61ce52c80b40","Type":"ContainerStarted","Data":"f699a105fe231935a57f319bde3da65dd4cea77f3ac753de55671724e4a72cf2"} Jan 22 15:35:03 crc kubenswrapper[4825]: I0122 15:35:03.505416 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvpnf" event={"ID":"a55ed53b-731c-41a4-8f41-8baa28baf731","Type":"ContainerStarted","Data":"e92227c87d8b8b86493c5dda78061a51607c1a7258749c3db54934e37ce65a9e"} Jan 22 15:35:03 crc kubenswrapper[4825]: I0122 15:35:03.508258 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-hf8bj" event={"ID":"f957c7cc-d8d5-435b-976d-3fe554887cc0","Type":"ContainerStarted","Data":"e580113a12a779bd1ea644270dc40241a2c112da204fc18ef63c0fc37eada3c8"} Jan 22 15:35:03 crc kubenswrapper[4825]: I0122 15:35:03.509204 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-hf8bj" Jan 22 15:35:03 crc kubenswrapper[4825]: I0122 15:35:03.511594 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-8rcmg" event={"ID":"25d0f3c8-a90d-468d-97bf-61ce52c80b40","Type":"ContainerStarted","Data":"96fb11a825410ac77c0be7d4d4f2ae8a3ad80dcbce4d74c5be1c65ec7eaee915"} Jan 22 15:35:03 crc 
kubenswrapper[4825]: I0122 15:35:03.525021 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvpnf" podStartSLOduration=3.888242796 podStartE2EDuration="8.525003868s" podCreationTimestamp="2026-01-22 15:34:55 +0000 UTC" firstStartedPulling="2026-01-22 15:34:57.025398257 +0000 UTC m=+643.786925167" lastFinishedPulling="2026-01-22 15:35:01.662159309 +0000 UTC m=+648.423686239" observedRunningTime="2026-01-22 15:35:03.523885935 +0000 UTC m=+650.285412875" watchObservedRunningTime="2026-01-22 15:35:03.525003868 +0000 UTC m=+650.286530778" Jan 22 15:35:03 crc kubenswrapper[4825]: I0122 15:35:03.630443 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-hf8bj" podStartSLOduration=2.94494896 podStartE2EDuration="8.630419653s" podCreationTimestamp="2026-01-22 15:34:55 +0000 UTC" firstStartedPulling="2026-01-22 15:34:57.031683449 +0000 UTC m=+643.793210359" lastFinishedPulling="2026-01-22 15:35:02.717154142 +0000 UTC m=+649.478681052" observedRunningTime="2026-01-22 15:35:03.627594911 +0000 UTC m=+650.389121821" watchObservedRunningTime="2026-01-22 15:35:03.630419653 +0000 UTC m=+650.391946563" Jan 22 15:35:03 crc kubenswrapper[4825]: I0122 15:35:03.659727 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-8rcmg" podStartSLOduration=2.746717414 podStartE2EDuration="8.659704439s" podCreationTimestamp="2026-01-22 15:34:55 +0000 UTC" firstStartedPulling="2026-01-22 15:34:56.731297462 +0000 UTC m=+643.492824362" lastFinishedPulling="2026-01-22 15:35:02.644284467 +0000 UTC m=+649.405811387" observedRunningTime="2026-01-22 15:35:03.643443139 +0000 UTC m=+650.404970059" watchObservedRunningTime="2026-01-22 15:35:03.659704439 +0000 UTC m=+650.421231349" Jan 22 15:35:11 crc kubenswrapper[4825]: I0122 15:35:11.136890 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-hf8bj" Jan 22 15:35:44 crc kubenswrapper[4825]: I0122 15:35:44.700340 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh"] Jan 22 15:35:44 crc kubenswrapper[4825]: I0122 15:35:44.702238 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh" Jan 22 15:35:44 crc kubenswrapper[4825]: I0122 15:35:44.705594 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 15:35:44 crc kubenswrapper[4825]: I0122 15:35:44.712583 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh"] Jan 22 15:35:44 crc kubenswrapper[4825]: I0122 15:35:44.896652 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5krbz\" (UniqueName: \"kubernetes.io/projected/6070a053-75b3-46a8-9e38-b6a1ad5324a8-kube-api-access-5krbz\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh\" (UID: \"6070a053-75b3-46a8-9e38-b6a1ad5324a8\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh" Jan 22 15:35:44 crc kubenswrapper[4825]: I0122 15:35:44.896732 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6070a053-75b3-46a8-9e38-b6a1ad5324a8-util\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh\" (UID: \"6070a053-75b3-46a8-9e38-b6a1ad5324a8\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh" Jan 22 15:35:44 crc kubenswrapper[4825]: I0122 15:35:44.896797 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6070a053-75b3-46a8-9e38-b6a1ad5324a8-bundle\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh\" (UID: \"6070a053-75b3-46a8-9e38-b6a1ad5324a8\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh" Jan 22 15:35:44 crc kubenswrapper[4825]: I0122 15:35:44.998121 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5krbz\" (UniqueName: \"kubernetes.io/projected/6070a053-75b3-46a8-9e38-b6a1ad5324a8-kube-api-access-5krbz\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh\" (UID: \"6070a053-75b3-46a8-9e38-b6a1ad5324a8\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh" Jan 22 15:35:44 crc kubenswrapper[4825]: I0122 15:35:44.998179 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6070a053-75b3-46a8-9e38-b6a1ad5324a8-util\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh\" (UID: \"6070a053-75b3-46a8-9e38-b6a1ad5324a8\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh" Jan 22 15:35:44 crc kubenswrapper[4825]: I0122 15:35:44.998211 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6070a053-75b3-46a8-9e38-b6a1ad5324a8-bundle\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh\" (UID: \"6070a053-75b3-46a8-9e38-b6a1ad5324a8\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh" Jan 22 15:35:44 crc kubenswrapper[4825]: I0122 15:35:44.998765 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/6070a053-75b3-46a8-9e38-b6a1ad5324a8-bundle\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh\" (UID: \"6070a053-75b3-46a8-9e38-b6a1ad5324a8\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh" Jan 22 15:35:44 crc kubenswrapper[4825]: I0122 15:35:44.998808 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6070a053-75b3-46a8-9e38-b6a1ad5324a8-util\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh\" (UID: \"6070a053-75b3-46a8-9e38-b6a1ad5324a8\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh" Jan 22 15:35:45 crc kubenswrapper[4825]: I0122 15:35:45.021518 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5krbz\" (UniqueName: \"kubernetes.io/projected/6070a053-75b3-46a8-9e38-b6a1ad5324a8-kube-api-access-5krbz\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh\" (UID: \"6070a053-75b3-46a8-9e38-b6a1ad5324a8\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh" Jan 22 15:35:45 crc kubenswrapper[4825]: I0122 15:35:45.061368 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh"
Jan 22 15:35:45 crc kubenswrapper[4825]: I0122 15:35:45.314144 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh"]
Jan 22 15:35:45 crc kubenswrapper[4825]: I0122 15:35:45.983868 4825 generic.go:334] "Generic (PLEG): container finished" podID="6070a053-75b3-46a8-9e38-b6a1ad5324a8" containerID="71507ae73cffa5a24e1074ce1b51bc4ca152bfdf52620d36598b1c8bb1ae8019" exitCode=0
Jan 22 15:35:45 crc kubenswrapper[4825]: I0122 15:35:45.984121 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh" event={"ID":"6070a053-75b3-46a8-9e38-b6a1ad5324a8","Type":"ContainerDied","Data":"71507ae73cffa5a24e1074ce1b51bc4ca152bfdf52620d36598b1c8bb1ae8019"}
Jan 22 15:35:45 crc kubenswrapper[4825]: I0122 15:35:45.984306 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh" event={"ID":"6070a053-75b3-46a8-9e38-b6a1ad5324a8","Type":"ContainerStarted","Data":"cae6e8da0ca691b6a5e654330d7f7792948dd6e18507910f13d57bbe3b6b8e61"}
Jan 22 15:35:48 crc kubenswrapper[4825]: I0122 15:35:48.150705 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"]
Jan 22 15:35:48 crc kubenswrapper[4825]: I0122 15:35:48.152049 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Jan 22 15:35:48 crc kubenswrapper[4825]: I0122 15:35:48.154277 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt"
Jan 22 15:35:48 crc kubenswrapper[4825]: I0122 15:35:48.155643 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt"
Jan 22 15:35:48 crc kubenswrapper[4825]: I0122 15:35:48.172965 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Jan 22 15:35:48 crc kubenswrapper[4825]: I0122 15:35:48.343913 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djdtz\" (UniqueName: \"kubernetes.io/projected/2327a166-8745-452c-9a35-af6ece5770f6-kube-api-access-djdtz\") pod \"minio\" (UID: \"2327a166-8745-452c-9a35-af6ece5770f6\") " pod="minio-dev/minio"
Jan 22 15:35:48 crc kubenswrapper[4825]: I0122 15:35:48.344083 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1d86d341-e786-4f78-bcbf-c1ad13908c9a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d86d341-e786-4f78-bcbf-c1ad13908c9a\") pod \"minio\" (UID: \"2327a166-8745-452c-9a35-af6ece5770f6\") " pod="minio-dev/minio"
Jan 22 15:35:48 crc kubenswrapper[4825]: I0122 15:35:48.445788 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djdtz\" (UniqueName: \"kubernetes.io/projected/2327a166-8745-452c-9a35-af6ece5770f6-kube-api-access-djdtz\") pod \"minio\" (UID: \"2327a166-8745-452c-9a35-af6ece5770f6\") " pod="minio-dev/minio"
Jan 22 15:35:48 crc kubenswrapper[4825]: I0122 15:35:48.445893 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1d86d341-e786-4f78-bcbf-c1ad13908c9a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d86d341-e786-4f78-bcbf-c1ad13908c9a\") pod \"minio\" (UID: \"2327a166-8745-452c-9a35-af6ece5770f6\") " pod="minio-dev/minio"
Jan 22 15:35:48 crc kubenswrapper[4825]: I0122 15:35:48.450132 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 22 15:35:48 crc kubenswrapper[4825]: I0122 15:35:48.450180 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1d86d341-e786-4f78-bcbf-c1ad13908c9a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d86d341-e786-4f78-bcbf-c1ad13908c9a\") pod \"minio\" (UID: \"2327a166-8745-452c-9a35-af6ece5770f6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9ba401c29e64ad2d3044a5ec895152d9ce37380aed42bc05927455e1f8f4b58b/globalmount\"" pod="minio-dev/minio"
Jan 22 15:35:48 crc kubenswrapper[4825]: I0122 15:35:48.469892 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djdtz\" (UniqueName: \"kubernetes.io/projected/2327a166-8745-452c-9a35-af6ece5770f6-kube-api-access-djdtz\") pod \"minio\" (UID: \"2327a166-8745-452c-9a35-af6ece5770f6\") " pod="minio-dev/minio"
Jan 22 15:35:48 crc kubenswrapper[4825]: I0122 15:35:48.492910 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1d86d341-e786-4f78-bcbf-c1ad13908c9a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d86d341-e786-4f78-bcbf-c1ad13908c9a\") pod \"minio\" (UID: \"2327a166-8745-452c-9a35-af6ece5770f6\") " pod="minio-dev/minio"
Jan 22 15:35:48 crc kubenswrapper[4825]: I0122 15:35:48.773882 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Jan 22 15:35:49 crc kubenswrapper[4825]: I0122 15:35:49.033263 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Jan 22 15:35:50 crc kubenswrapper[4825]: I0122 15:35:50.019703 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"2327a166-8745-452c-9a35-af6ece5770f6","Type":"ContainerStarted","Data":"fe625c0e91287b0233fe23a2ec2f3be54e83c5d104a2c21b44ff22e8bb7cb2e0"}
Jan 22 15:35:56 crc kubenswrapper[4825]: I0122 15:35:56.063344 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"2327a166-8745-452c-9a35-af6ece5770f6","Type":"ContainerStarted","Data":"16a25dd234f4304fc260f0d25c994bc971507a92e4ef93a2d672ecb8ab218d13"}
Jan 22 15:35:56 crc kubenswrapper[4825]: I0122 15:35:56.066214 4825 generic.go:334] "Generic (PLEG): container finished" podID="6070a053-75b3-46a8-9e38-b6a1ad5324a8" containerID="9d4c37b5e122eecad4f2977bf07c63acb1f9b036b75ca51c06c24bb181f2b01f" exitCode=0
Jan 22 15:35:56 crc kubenswrapper[4825]: I0122 15:35:56.066277 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh" event={"ID":"6070a053-75b3-46a8-9e38-b6a1ad5324a8","Type":"ContainerDied","Data":"9d4c37b5e122eecad4f2977bf07c63acb1f9b036b75ca51c06c24bb181f2b01f"}
Jan 22 15:35:56 crc kubenswrapper[4825]: I0122 15:35:56.112360 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.668503171 podStartE2EDuration="11.11234112s" podCreationTimestamp="2026-01-22 15:35:45 +0000 UTC" firstStartedPulling="2026-01-22 15:35:49.038467923 +0000 UTC m=+695.799994833" lastFinishedPulling="2026-01-22 15:35:55.482305832 +0000 UTC m=+702.243832782" observedRunningTime="2026-01-22 15:35:56.089769458 +0000 UTC m=+702.851296368" watchObservedRunningTime="2026-01-22 15:35:56.11234112 +0000 UTC m=+702.873868030"
Jan 22 15:35:57 crc kubenswrapper[4825]: I0122 15:35:57.079280 4825 generic.go:334] "Generic (PLEG): container finished" podID="6070a053-75b3-46a8-9e38-b6a1ad5324a8" containerID="6b1b3c3847508ba13dcfb7fb6a62a8607305083965a6d6ce2313a0fa558f1d2f" exitCode=0
Jan 22 15:35:57 crc kubenswrapper[4825]: I0122 15:35:57.079331 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh" event={"ID":"6070a053-75b3-46a8-9e38-b6a1ad5324a8","Type":"ContainerDied","Data":"6b1b3c3847508ba13dcfb7fb6a62a8607305083965a6d6ce2313a0fa558f1d2f"}
Jan 22 15:35:58 crc kubenswrapper[4825]: I0122 15:35:58.450037 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh"
Jan 22 15:35:58 crc kubenswrapper[4825]: I0122 15:35:58.557078 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6070a053-75b3-46a8-9e38-b6a1ad5324a8-util\") pod \"6070a053-75b3-46a8-9e38-b6a1ad5324a8\" (UID: \"6070a053-75b3-46a8-9e38-b6a1ad5324a8\") "
Jan 22 15:35:58 crc kubenswrapper[4825]: I0122 15:35:58.557358 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5krbz\" (UniqueName: \"kubernetes.io/projected/6070a053-75b3-46a8-9e38-b6a1ad5324a8-kube-api-access-5krbz\") pod \"6070a053-75b3-46a8-9e38-b6a1ad5324a8\" (UID: \"6070a053-75b3-46a8-9e38-b6a1ad5324a8\") "
Jan 22 15:35:58 crc kubenswrapper[4825]: I0122 15:35:58.558356 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6070a053-75b3-46a8-9e38-b6a1ad5324a8-bundle\") pod \"6070a053-75b3-46a8-9e38-b6a1ad5324a8\" (UID: \"6070a053-75b3-46a8-9e38-b6a1ad5324a8\") "
Jan 22 15:35:58 crc kubenswrapper[4825]: I0122 15:35:58.559730 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6070a053-75b3-46a8-9e38-b6a1ad5324a8-bundle" (OuterVolumeSpecName: "bundle") pod "6070a053-75b3-46a8-9e38-b6a1ad5324a8" (UID: "6070a053-75b3-46a8-9e38-b6a1ad5324a8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:35:58 crc kubenswrapper[4825]: I0122 15:35:58.562596 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6070a053-75b3-46a8-9e38-b6a1ad5324a8-kube-api-access-5krbz" (OuterVolumeSpecName: "kube-api-access-5krbz") pod "6070a053-75b3-46a8-9e38-b6a1ad5324a8" (UID: "6070a053-75b3-46a8-9e38-b6a1ad5324a8"). InnerVolumeSpecName "kube-api-access-5krbz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:35:58 crc kubenswrapper[4825]: I0122 15:35:58.568656 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6070a053-75b3-46a8-9e38-b6a1ad5324a8-util" (OuterVolumeSpecName: "util") pod "6070a053-75b3-46a8-9e38-b6a1ad5324a8" (UID: "6070a053-75b3-46a8-9e38-b6a1ad5324a8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:35:58 crc kubenswrapper[4825]: I0122 15:35:58.660123 4825 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6070a053-75b3-46a8-9e38-b6a1ad5324a8-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 15:35:58 crc kubenswrapper[4825]: I0122 15:35:58.660155 4825 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6070a053-75b3-46a8-9e38-b6a1ad5324a8-util\") on node \"crc\" DevicePath \"\""
Jan 22 15:35:58 crc kubenswrapper[4825]: I0122 15:35:58.660164 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5krbz\" (UniqueName: \"kubernetes.io/projected/6070a053-75b3-46a8-9e38-b6a1ad5324a8-kube-api-access-5krbz\") on node \"crc\" DevicePath \"\""
Jan 22 15:35:59 crc kubenswrapper[4825]: I0122 15:35:59.110676 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh" event={"ID":"6070a053-75b3-46a8-9e38-b6a1ad5324a8","Type":"ContainerDied","Data":"cae6e8da0ca691b6a5e654330d7f7792948dd6e18507910f13d57bbe3b6b8e61"}
Jan 22 15:35:59 crc kubenswrapper[4825]: I0122 15:35:59.110718 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cae6e8da0ca691b6a5e654330d7f7792948dd6e18507910f13d57bbe3b6b8e61"
Jan 22 15:35:59 crc kubenswrapper[4825]: I0122 15:35:59.110791 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.730880 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"]
Jan 22 15:36:03 crc kubenswrapper[4825]: E0122 15:36:03.732736 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6070a053-75b3-46a8-9e38-b6a1ad5324a8" containerName="util"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.732828 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="6070a053-75b3-46a8-9e38-b6a1ad5324a8" containerName="util"
Jan 22 15:36:03 crc kubenswrapper[4825]: E0122 15:36:03.732902 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6070a053-75b3-46a8-9e38-b6a1ad5324a8" containerName="extract"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.733029 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="6070a053-75b3-46a8-9e38-b6a1ad5324a8" containerName="extract"
Jan 22 15:36:03 crc kubenswrapper[4825]: E0122 15:36:03.733126 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6070a053-75b3-46a8-9e38-b6a1ad5324a8" containerName="pull"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.733196 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="6070a053-75b3-46a8-9e38-b6a1ad5324a8" containerName="pull"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.733409 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="6070a053-75b3-46a8-9e38-b6a1ad5324a8" containerName="extract"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.734296 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.737690 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.737796 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.738004 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.738099 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.738208 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.738789 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-thhzq"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.753588 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"]
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.838421 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52f9b085-39e8-4a44-93c0-be3d751cb667-apiservice-cert\") pod \"loki-operator-controller-manager-5867677bb9-kgwmt\" (UID: \"52f9b085-39e8-4a44-93c0-be3d751cb667\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.838493 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/52f9b085-39e8-4a44-93c0-be3d751cb667-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5867677bb9-kgwmt\" (UID: \"52f9b085-39e8-4a44-93c0-be3d751cb667\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.838513 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52f9b085-39e8-4a44-93c0-be3d751cb667-webhook-cert\") pod \"loki-operator-controller-manager-5867677bb9-kgwmt\" (UID: \"52f9b085-39e8-4a44-93c0-be3d751cb667\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.838564 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/52f9b085-39e8-4a44-93c0-be3d751cb667-manager-config\") pod \"loki-operator-controller-manager-5867677bb9-kgwmt\" (UID: \"52f9b085-39e8-4a44-93c0-be3d751cb667\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.838610 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ghmj\" (UniqueName: \"kubernetes.io/projected/52f9b085-39e8-4a44-93c0-be3d751cb667-kube-api-access-2ghmj\") pod \"loki-operator-controller-manager-5867677bb9-kgwmt\" (UID: \"52f9b085-39e8-4a44-93c0-be3d751cb667\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.939581 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52f9b085-39e8-4a44-93c0-be3d751cb667-apiservice-cert\") pod \"loki-operator-controller-manager-5867677bb9-kgwmt\" (UID: \"52f9b085-39e8-4a44-93c0-be3d751cb667\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.939639 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/52f9b085-39e8-4a44-93c0-be3d751cb667-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5867677bb9-kgwmt\" (UID: \"52f9b085-39e8-4a44-93c0-be3d751cb667\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.939657 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52f9b085-39e8-4a44-93c0-be3d751cb667-webhook-cert\") pod \"loki-operator-controller-manager-5867677bb9-kgwmt\" (UID: \"52f9b085-39e8-4a44-93c0-be3d751cb667\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.939692 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/52f9b085-39e8-4a44-93c0-be3d751cb667-manager-config\") pod \"loki-operator-controller-manager-5867677bb9-kgwmt\" (UID: \"52f9b085-39e8-4a44-93c0-be3d751cb667\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.939741 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ghmj\" (UniqueName: \"kubernetes.io/projected/52f9b085-39e8-4a44-93c0-be3d751cb667-kube-api-access-2ghmj\") pod \"loki-operator-controller-manager-5867677bb9-kgwmt\" (UID: \"52f9b085-39e8-4a44-93c0-be3d751cb667\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.940663 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/52f9b085-39e8-4a44-93c0-be3d751cb667-manager-config\") pod \"loki-operator-controller-manager-5867677bb9-kgwmt\" (UID: \"52f9b085-39e8-4a44-93c0-be3d751cb667\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.944453 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52f9b085-39e8-4a44-93c0-be3d751cb667-webhook-cert\") pod \"loki-operator-controller-manager-5867677bb9-kgwmt\" (UID: \"52f9b085-39e8-4a44-93c0-be3d751cb667\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.944792 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/52f9b085-39e8-4a44-93c0-be3d751cb667-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5867677bb9-kgwmt\" (UID: \"52f9b085-39e8-4a44-93c0-be3d751cb667\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.945353 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52f9b085-39e8-4a44-93c0-be3d751cb667-apiservice-cert\") pod \"loki-operator-controller-manager-5867677bb9-kgwmt\" (UID: \"52f9b085-39e8-4a44-93c0-be3d751cb667\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"
Jan 22 15:36:03 crc kubenswrapper[4825]: I0122 15:36:03.955706 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ghmj\" (UniqueName: \"kubernetes.io/projected/52f9b085-39e8-4a44-93c0-be3d751cb667-kube-api-access-2ghmj\") pod \"loki-operator-controller-manager-5867677bb9-kgwmt\" (UID: \"52f9b085-39e8-4a44-93c0-be3d751cb667\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"
Jan 22 15:36:04 crc kubenswrapper[4825]: I0122 15:36:04.051355 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"
Jan 22 15:36:04 crc kubenswrapper[4825]: I0122 15:36:04.355500 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"]
Jan 22 15:36:05 crc kubenswrapper[4825]: I0122 15:36:05.343060 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt" event={"ID":"52f9b085-39e8-4a44-93c0-be3d751cb667","Type":"ContainerStarted","Data":"0bd25e3efcffbb1b36b27cb815d65e75b6253813767f94865f5661cd3959a89d"}
Jan 22 15:36:05 crc kubenswrapper[4825]: I0122 15:36:05.542119 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 15:36:05 crc kubenswrapper[4825]: I0122 15:36:05.542192 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 15:36:09 crc kubenswrapper[4825]: I0122 15:36:09.614665 4825 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 22 15:36:12 crc kubenswrapper[4825]: I0122 15:36:12.442625 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt" event={"ID":"52f9b085-39e8-4a44-93c0-be3d751cb667","Type":"ContainerStarted","Data":"60f7f1bbbaa6cc34abce13a32384f11074972d026916188b6e18cb41475b61b9"}
Jan 22 15:36:20 crc kubenswrapper[4825]: I0122 15:36:20.500288 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt" event={"ID":"52f9b085-39e8-4a44-93c0-be3d751cb667","Type":"ContainerStarted","Data":"d3e6ea713db65de8a950fd4f224a1aa7b93254b52fd591de4520f52ee0bdeea9"}
Jan 22 15:36:20 crc kubenswrapper[4825]: I0122 15:36:20.500901 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"
Jan 22 15:36:20 crc kubenswrapper[4825]: I0122 15:36:20.503086 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt"
Jan 22 15:36:20 crc kubenswrapper[4825]: I0122 15:36:20.540242 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-5867677bb9-kgwmt" podStartSLOduration=2.288262161 podStartE2EDuration="17.540213191s" podCreationTimestamp="2026-01-22 15:36:03 +0000 UTC" firstStartedPulling="2026-01-22 15:36:04.363508173 +0000 UTC m=+711.125035083" lastFinishedPulling="2026-01-22 15:36:19.615459173 +0000 UTC m=+726.376986113" observedRunningTime="2026-01-22 15:36:20.530938853 +0000 UTC m=+727.292465833" watchObservedRunningTime="2026-01-22 15:36:20.540213191 +0000 UTC m=+727.301740141"
Jan 22 15:36:35 crc kubenswrapper[4825]: I0122 15:36:35.542442 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 15:36:35 crc kubenswrapper[4825]: I0122 15:36:35.543248 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 15:36:46 crc kubenswrapper[4825]: I0122 15:36:46.226625 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc"]
Jan 22 15:36:46 crc kubenswrapper[4825]: I0122 15:36:46.229528 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc"
Jan 22 15:36:46 crc kubenswrapper[4825]: I0122 15:36:46.231319 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 22 15:36:46 crc kubenswrapper[4825]: I0122 15:36:46.242590 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc"]
Jan 22 15:36:46 crc kubenswrapper[4825]: I0122 15:36:46.284451 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q62kg\" (UniqueName: \"kubernetes.io/projected/ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6-kube-api-access-q62kg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc\" (UID: \"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc"
Jan 22 15:36:46 crc kubenswrapper[4825]: I0122 15:36:46.284502 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc\" (UID: \"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc"
Jan 22 15:36:46 crc kubenswrapper[4825]: I0122 15:36:46.284591 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc\" (UID: \"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc"
Jan 22 15:36:46 crc kubenswrapper[4825]: I0122 15:36:46.385417 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc\" (UID: \"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc"
Jan 22 15:36:46 crc kubenswrapper[4825]: I0122 15:36:46.385551 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q62kg\" (UniqueName: \"kubernetes.io/projected/ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6-kube-api-access-q62kg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc\" (UID: \"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc"
Jan 22 15:36:46 crc kubenswrapper[4825]: I0122 15:36:46.385626 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc\" (UID: \"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc"
Jan 22 15:36:46 crc kubenswrapper[4825]: I0122 15:36:46.386282 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc\" (UID: \"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc"
Jan 22 15:36:46 crc kubenswrapper[4825]: I0122 15:36:46.390520 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc\" (UID: \"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc"
Jan 22 15:36:46 crc kubenswrapper[4825]: I0122 15:36:46.403790 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q62kg\" (UniqueName: \"kubernetes.io/projected/ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6-kube-api-access-q62kg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc\" (UID: \"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc"
Jan 22 15:36:46 crc kubenswrapper[4825]: I0122 15:36:46.584832 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc"
Jan 22 15:36:46 crc kubenswrapper[4825]: I0122 15:36:46.932540 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc"]
Jan 22 15:36:47 crc kubenswrapper[4825]: I0122 15:36:47.715534 4825 generic.go:334] "Generic (PLEG): container finished" podID="ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6" containerID="221f517779872fe94f4a817b665eb039114fe1ce0d8c4c67b1070a6decf58a27" exitCode=0
Jan 22 15:36:47 crc kubenswrapper[4825]: I0122 15:36:47.715576 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc" event={"ID":"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6","Type":"ContainerDied","Data":"221f517779872fe94f4a817b665eb039114fe1ce0d8c4c67b1070a6decf58a27"}
Jan 22 15:36:47 crc kubenswrapper[4825]: I0122 15:36:47.715906 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc" event={"ID":"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6","Type":"ContainerStarted","Data":"1bfe0204ce5c8b64228ceefb0ca37aebc7149faf3c474b3680e9208ad0e752d4"}
Jan 22 15:36:48 crc kubenswrapper[4825]: I0122 15:36:48.578600 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lnhgp"]
Jan 22 15:36:48 crc kubenswrapper[4825]: I0122 15:36:48.580247 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnhgp"
Jan 22 15:36:48 crc kubenswrapper[4825]: I0122 15:36:48.603132 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lnhgp"]
Jan 22 15:36:48 crc kubenswrapper[4825]: I0122 15:36:48.752374 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f85a671-e159-4cd7-82d2-f31fd4557ee6-utilities\") pod \"redhat-operators-lnhgp\" (UID: \"0f85a671-e159-4cd7-82d2-f31fd4557ee6\") " pod="openshift-marketplace/redhat-operators-lnhgp"
Jan 22 15:36:48 crc kubenswrapper[4825]: I0122 15:36:48.752456 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvjtc\" (UniqueName: \"kubernetes.io/projected/0f85a671-e159-4cd7-82d2-f31fd4557ee6-kube-api-access-kvjtc\") pod \"redhat-operators-lnhgp\" (UID: \"0f85a671-e159-4cd7-82d2-f31fd4557ee6\") " pod="openshift-marketplace/redhat-operators-lnhgp"
Jan 22 15:36:48 crc kubenswrapper[4825]: I0122 15:36:48.752632 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f85a671-e159-4cd7-82d2-f31fd4557ee6-catalog-content\") pod \"redhat-operators-lnhgp\" (UID: \"0f85a671-e159-4cd7-82d2-f31fd4557ee6\") " pod="openshift-marketplace/redhat-operators-lnhgp"
Jan 22 15:36:48 crc kubenswrapper[4825]: I0122 15:36:48.854011 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f85a671-e159-4cd7-82d2-f31fd4557ee6-utilities\") pod \"redhat-operators-lnhgp\" (UID: \"0f85a671-e159-4cd7-82d2-f31fd4557ee6\") " pod="openshift-marketplace/redhat-operators-lnhgp"
Jan 22 15:36:48 crc kubenswrapper[4825]: I0122 15:36:48.854080 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvjtc\" (UniqueName: \"kubernetes.io/projected/0f85a671-e159-4cd7-82d2-f31fd4557ee6-kube-api-access-kvjtc\") pod \"redhat-operators-lnhgp\" (UID: \"0f85a671-e159-4cd7-82d2-f31fd4557ee6\") " pod="openshift-marketplace/redhat-operators-lnhgp"
Jan 22 15:36:48 crc kubenswrapper[4825]: I0122 15:36:48.854121 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f85a671-e159-4cd7-82d2-f31fd4557ee6-catalog-content\") pod \"redhat-operators-lnhgp\" (UID: \"0f85a671-e159-4cd7-82d2-f31fd4557ee6\") " pod="openshift-marketplace/redhat-operators-lnhgp"
Jan 22 15:36:48 crc kubenswrapper[4825]: I0122 15:36:48.854534 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f85a671-e159-4cd7-82d2-f31fd4557ee6-catalog-content\") pod \"redhat-operators-lnhgp\" (UID: \"0f85a671-e159-4cd7-82d2-f31fd4557ee6\") " pod="openshift-marketplace/redhat-operators-lnhgp"
Jan 22 15:36:48 crc kubenswrapper[4825]: I0122 15:36:48.854537 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f85a671-e159-4cd7-82d2-f31fd4557ee6-utilities\") pod \"redhat-operators-lnhgp\" (UID: \"0f85a671-e159-4cd7-82d2-f31fd4557ee6\") " pod="openshift-marketplace/redhat-operators-lnhgp"
Jan 22 15:36:48 crc kubenswrapper[4825]: I0122 15:36:48.876727 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvjtc\" (UniqueName: \"kubernetes.io/projected/0f85a671-e159-4cd7-82d2-f31fd4557ee6-kube-api-access-kvjtc\") pod \"redhat-operators-lnhgp\" (UID: \"0f85a671-e159-4cd7-82d2-f31fd4557ee6\") " pod="openshift-marketplace/redhat-operators-lnhgp"
Jan 22 15:36:48 crc kubenswrapper[4825]: I0122 15:36:48.910441 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnhgp"
Jan 22 15:36:49 crc kubenswrapper[4825]: I0122 15:36:49.306062 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lnhgp"]
Jan 22 15:36:49 crc kubenswrapper[4825]: I0122 15:36:49.730021 4825 generic.go:334] "Generic (PLEG): container finished" podID="0f85a671-e159-4cd7-82d2-f31fd4557ee6" containerID="2b618d985104bc8ebc2301334c78eff6ddc0d1ae74a3542a65a80a4568e16d95" exitCode=0
Jan 22 15:36:49 crc kubenswrapper[4825]: I0122 15:36:49.730078 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnhgp" event={"ID":"0f85a671-e159-4cd7-82d2-f31fd4557ee6","Type":"ContainerDied","Data":"2b618d985104bc8ebc2301334c78eff6ddc0d1ae74a3542a65a80a4568e16d95"}
Jan 22 15:36:49 crc kubenswrapper[4825]: I0122 15:36:49.730115 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnhgp" event={"ID":"0f85a671-e159-4cd7-82d2-f31fd4557ee6","Type":"ContainerStarted","Data":"e03816a09ef5029b4d76d50dadf4376efe620d3f63a8b8b39812306e30c4040f"}
Jan 22 15:36:50 crc kubenswrapper[4825]: I0122 15:36:50.738673 4825 generic.go:334] "Generic (PLEG): container finished" podID="ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6" containerID="70b9666d4da0745d3e0b1d41617d14dafe952842c10db399303b007d7cd4748f" exitCode=0
Jan 22 15:36:50 crc kubenswrapper[4825]: I0122 15:36:50.738767 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc" event={"ID":"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6","Type":"ContainerDied","Data":"70b9666d4da0745d3e0b1d41617d14dafe952842c10db399303b007d7cd4748f"}
Jan 22 15:36:51 crc kubenswrapper[4825]: I0122 15:36:51.748619 4825 generic.go:334] "Generic (PLEG): container finished" podID="ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6" containerID="d2b16719c0663026be63970f7bae5bd0f5ebe799a3fcc31e1683afb7ac58c9e1" exitCode=0
Jan 22 15:36:51 crc kubenswrapper[4825]: I0122 15:36:51.748702 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc" event={"ID":"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6","Type":"ContainerDied","Data":"d2b16719c0663026be63970f7bae5bd0f5ebe799a3fcc31e1683afb7ac58c9e1"}
Jan 22 15:36:52 crc kubenswrapper[4825]: I0122 15:36:52.760040 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnhgp" event={"ID":"0f85a671-e159-4cd7-82d2-f31fd4557ee6","Type":"ContainerStarted","Data":"477f42fe801d2a5dbbb9979d5edee6d0eaede42e12c2dc678a116eebd3a7b230"}
Jan 22 15:36:53 crc kubenswrapper[4825]: I0122 15:36:53.344353 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc"
Jan 22 15:36:53 crc kubenswrapper[4825]: I0122 15:36:53.349193 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q62kg\" (UniqueName: \"kubernetes.io/projected/ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6-kube-api-access-q62kg\") pod \"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6\" (UID: \"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6\") "
Jan 22 15:36:53 crc kubenswrapper[4825]: I0122 15:36:53.349278 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6-util\") pod \"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6\" (UID: \"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6\") "
Jan 22 15:36:53 crc kubenswrapper[4825]: I0122 15:36:53.349331 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6-bundle\") pod \"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6\"
(UID: \"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6\") " Jan 22 15:36:53 crc kubenswrapper[4825]: I0122 15:36:53.349934 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6-bundle" (OuterVolumeSpecName: "bundle") pod "ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6" (UID: "ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:36:53 crc kubenswrapper[4825]: I0122 15:36:53.354282 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6-kube-api-access-q62kg" (OuterVolumeSpecName: "kube-api-access-q62kg") pod "ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6" (UID: "ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6"). InnerVolumeSpecName "kube-api-access-q62kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:36:53 crc kubenswrapper[4825]: I0122 15:36:53.360702 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6-util" (OuterVolumeSpecName: "util") pod "ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6" (UID: "ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:36:53 crc kubenswrapper[4825]: I0122 15:36:53.450181 4825 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6-util\") on node \"crc\" DevicePath \"\"" Jan 22 15:36:53 crc kubenswrapper[4825]: I0122 15:36:53.450228 4825 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:36:53 crc kubenswrapper[4825]: I0122 15:36:53.450238 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q62kg\" (UniqueName: \"kubernetes.io/projected/ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6-kube-api-access-q62kg\") on node \"crc\" DevicePath \"\"" Jan 22 15:36:53 crc kubenswrapper[4825]: I0122 15:36:53.791642 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc" Jan 22 15:36:53 crc kubenswrapper[4825]: I0122 15:36:53.792227 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc" event={"ID":"ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6","Type":"ContainerDied","Data":"1bfe0204ce5c8b64228ceefb0ca37aebc7149faf3c474b3680e9208ad0e752d4"} Jan 22 15:36:53 crc kubenswrapper[4825]: I0122 15:36:53.792256 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bfe0204ce5c8b64228ceefb0ca37aebc7149faf3c474b3680e9208ad0e752d4" Jan 22 15:36:54 crc kubenswrapper[4825]: I0122 15:36:54.798132 4825 generic.go:334] "Generic (PLEG): container finished" podID="0f85a671-e159-4cd7-82d2-f31fd4557ee6" containerID="477f42fe801d2a5dbbb9979d5edee6d0eaede42e12c2dc678a116eebd3a7b230" exitCode=0 Jan 22 15:36:54 crc kubenswrapper[4825]: I0122 15:36:54.798210 4825 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnhgp" event={"ID":"0f85a671-e159-4cd7-82d2-f31fd4557ee6","Type":"ContainerDied","Data":"477f42fe801d2a5dbbb9979d5edee6d0eaede42e12c2dc678a116eebd3a7b230"} Jan 22 15:36:55 crc kubenswrapper[4825]: I0122 15:36:55.807022 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnhgp" event={"ID":"0f85a671-e159-4cd7-82d2-f31fd4557ee6","Type":"ContainerStarted","Data":"f27e190fa95ae2d8f1c6173c3b4f6bad5d4a9571156448ba577c3d0a6f8a9e18"} Jan 22 15:36:55 crc kubenswrapper[4825]: I0122 15:36:55.826520 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lnhgp" podStartSLOduration=2.363707484 podStartE2EDuration="7.826498722s" podCreationTimestamp="2026-01-22 15:36:48 +0000 UTC" firstStartedPulling="2026-01-22 15:36:49.731486078 +0000 UTC m=+756.493012978" lastFinishedPulling="2026-01-22 15:36:55.194277306 +0000 UTC m=+761.955804216" observedRunningTime="2026-01-22 15:36:55.822801906 +0000 UTC m=+762.584328816" watchObservedRunningTime="2026-01-22 15:36:55.826498722 +0000 UTC m=+762.588025632" Jan 22 15:36:56 crc kubenswrapper[4825]: I0122 15:36:56.191158 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bjdhf"] Jan 22 15:36:56 crc kubenswrapper[4825]: E0122 15:36:56.191437 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6" containerName="extract" Jan 22 15:36:56 crc kubenswrapper[4825]: I0122 15:36:56.191454 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6" containerName="extract" Jan 22 15:36:56 crc kubenswrapper[4825]: E0122 15:36:56.191466 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6" containerName="util" Jan 22 15:36:56 crc kubenswrapper[4825]: I0122 15:36:56.191480 4825 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6" containerName="util" Jan 22 15:36:56 crc kubenswrapper[4825]: E0122 15:36:56.191504 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6" containerName="pull" Jan 22 15:36:56 crc kubenswrapper[4825]: I0122 15:36:56.191511 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6" containerName="pull" Jan 22 15:36:56 crc kubenswrapper[4825]: I0122 15:36:56.191646 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6" containerName="extract" Jan 22 15:36:56 crc kubenswrapper[4825]: I0122 15:36:56.192194 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-bjdhf" Jan 22 15:36:56 crc kubenswrapper[4825]: I0122 15:36:56.195021 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 22 15:36:56 crc kubenswrapper[4825]: I0122 15:36:56.195041 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-zrbj7" Jan 22 15:36:56 crc kubenswrapper[4825]: I0122 15:36:56.195021 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 22 15:36:56 crc kubenswrapper[4825]: I0122 15:36:56.216187 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bjdhf"] Jan 22 15:36:56 crc kubenswrapper[4825]: I0122 15:36:56.268486 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9x7v\" (UniqueName: \"kubernetes.io/projected/072bae22-8fb1-4abb-ab89-da32c2282f11-kube-api-access-l9x7v\") pod \"nmstate-operator-646758c888-bjdhf\" (UID: \"072bae22-8fb1-4abb-ab89-da32c2282f11\") " 
pod="openshift-nmstate/nmstate-operator-646758c888-bjdhf" Jan 22 15:36:56 crc kubenswrapper[4825]: I0122 15:36:56.369822 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9x7v\" (UniqueName: \"kubernetes.io/projected/072bae22-8fb1-4abb-ab89-da32c2282f11-kube-api-access-l9x7v\") pod \"nmstate-operator-646758c888-bjdhf\" (UID: \"072bae22-8fb1-4abb-ab89-da32c2282f11\") " pod="openshift-nmstate/nmstate-operator-646758c888-bjdhf" Jan 22 15:36:56 crc kubenswrapper[4825]: I0122 15:36:56.483115 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9x7v\" (UniqueName: \"kubernetes.io/projected/072bae22-8fb1-4abb-ab89-da32c2282f11-kube-api-access-l9x7v\") pod \"nmstate-operator-646758c888-bjdhf\" (UID: \"072bae22-8fb1-4abb-ab89-da32c2282f11\") " pod="openshift-nmstate/nmstate-operator-646758c888-bjdhf" Jan 22 15:36:56 crc kubenswrapper[4825]: I0122 15:36:56.566302 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-bjdhf" Jan 22 15:36:57 crc kubenswrapper[4825]: I0122 15:36:57.318420 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bjdhf"] Jan 22 15:36:57 crc kubenswrapper[4825]: I0122 15:36:57.823253 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-bjdhf" event={"ID":"072bae22-8fb1-4abb-ab89-da32c2282f11","Type":"ContainerStarted","Data":"85eb6e6b6b7fa7a90e1fa4d9283dedd7da9b4e0cf55bb5a053e6ef5bc3e02ecb"} Jan 22 15:36:58 crc kubenswrapper[4825]: I0122 15:36:58.910810 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lnhgp" Jan 22 15:36:58 crc kubenswrapper[4825]: I0122 15:36:58.911124 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lnhgp" Jan 22 15:36:59 crc kubenswrapper[4825]: I0122 15:36:59.957153 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lnhgp" podUID="0f85a671-e159-4cd7-82d2-f31fd4557ee6" containerName="registry-server" probeResult="failure" output=< Jan 22 15:36:59 crc kubenswrapper[4825]: timeout: failed to connect service ":50051" within 1s Jan 22 15:36:59 crc kubenswrapper[4825]: > Jan 22 15:37:00 crc kubenswrapper[4825]: I0122 15:37:00.839898 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-bjdhf" event={"ID":"072bae22-8fb1-4abb-ab89-da32c2282f11","Type":"ContainerStarted","Data":"7d67e79ac0297664eb137a5e00afd4fe9747297662e62e2a3690e428662c062d"} Jan 22 15:37:00 crc kubenswrapper[4825]: I0122 15:37:00.862387 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-bjdhf" podStartSLOduration=1.6582149130000001 podStartE2EDuration="4.862368567s" 
podCreationTimestamp="2026-01-22 15:36:56 +0000 UTC" firstStartedPulling="2026-01-22 15:36:57.331523865 +0000 UTC m=+764.093050785" lastFinishedPulling="2026-01-22 15:37:00.535677529 +0000 UTC m=+767.297204439" observedRunningTime="2026-01-22 15:37:00.858382571 +0000 UTC m=+767.619909481" watchObservedRunningTime="2026-01-22 15:37:00.862368567 +0000 UTC m=+767.623895487" Jan 22 15:37:05 crc kubenswrapper[4825]: I0122 15:37:05.542186 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:37:05 crc kubenswrapper[4825]: I0122 15:37:05.542580 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:37:05 crc kubenswrapper[4825]: I0122 15:37:05.542641 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:37:05 crc kubenswrapper[4825]: I0122 15:37:05.543327 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ec0593b524672c0173949c1239ba7fcb03695ca8acb4008e01a270f260b0ff1"} pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 15:37:05 crc kubenswrapper[4825]: I0122 15:37:05.543413 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" 
podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" containerID="cri-o://5ec0593b524672c0173949c1239ba7fcb03695ca8acb4008e01a270f260b0ff1" gracePeriod=600 Jan 22 15:37:06 crc kubenswrapper[4825]: I0122 15:37:06.880640 4825 generic.go:334] "Generic (PLEG): container finished" podID="1d6015ae-d193-4854-9861-dc4384510fdb" containerID="5ec0593b524672c0173949c1239ba7fcb03695ca8acb4008e01a270f260b0ff1" exitCode=0 Jan 22 15:37:06 crc kubenswrapper[4825]: I0122 15:37:06.880710 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerDied","Data":"5ec0593b524672c0173949c1239ba7fcb03695ca8acb4008e01a270f260b0ff1"} Jan 22 15:37:06 crc kubenswrapper[4825]: I0122 15:37:06.880948 4825 scope.go:117] "RemoveContainer" containerID="8c25a004991eab3ed81c43c73a2fdebc13cc5dbf35ee92f9f96732e04ec4d469" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.504232 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-2vr9x"] Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.505577 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-2vr9x" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.512927 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-2vr9x"] Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.520605 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-6847p" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.531800 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-wbmlw"] Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.532851 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-wbmlw" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.545074 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-nncrc"] Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.545969 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nncrc" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.549600 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.578941 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-nncrc"] Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.658492 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cb155748-60ea-4ba4-8add-144027528478-ovs-socket\") pod \"nmstate-handler-wbmlw\" (UID: \"cb155748-60ea-4ba4-8add-144027528478\") " pod="openshift-nmstate/nmstate-handler-wbmlw" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.658554 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljfbz\" (UniqueName: \"kubernetes.io/projected/cb73c1aa-4e2b-40fd-aebe-21d16e031e60-kube-api-access-ljfbz\") pod \"nmstate-webhook-8474b5b9d8-nncrc\" (UID: \"cb73c1aa-4e2b-40fd-aebe-21d16e031e60\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nncrc" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.658884 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cb155748-60ea-4ba4-8add-144027528478-dbus-socket\") pod \"nmstate-handler-wbmlw\" (UID: \"cb155748-60ea-4ba4-8add-144027528478\") " 
pod="openshift-nmstate/nmstate-handler-wbmlw" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.658961 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cb7x\" (UniqueName: \"kubernetes.io/projected/61e185b2-1b85-42f6-be2f-7e2d9d698453-kube-api-access-4cb7x\") pod \"nmstate-metrics-54757c584b-2vr9x\" (UID: \"61e185b2-1b85-42f6-be2f-7e2d9d698453\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-2vr9x" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.659154 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cb73c1aa-4e2b-40fd-aebe-21d16e031e60-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-nncrc\" (UID: \"cb73c1aa-4e2b-40fd-aebe-21d16e031e60\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nncrc" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.659252 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cb155748-60ea-4ba4-8add-144027528478-nmstate-lock\") pod \"nmstate-handler-wbmlw\" (UID: \"cb155748-60ea-4ba4-8add-144027528478\") " pod="openshift-nmstate/nmstate-handler-wbmlw" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.659301 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djtl4\" (UniqueName: \"kubernetes.io/projected/cb155748-60ea-4ba4-8add-144027528478-kube-api-access-djtl4\") pod \"nmstate-handler-wbmlw\" (UID: \"cb155748-60ea-4ba4-8add-144027528478\") " pod="openshift-nmstate/nmstate-handler-wbmlw" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.667860 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-fxcqq"] Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.668862 4825 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fxcqq" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.672688 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.679344 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.679482 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-pxx25" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.680380 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-fxcqq"] Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.760476 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/daec0120-0078-4bfc-a484-c8e25bce75cc-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-fxcqq\" (UID: \"daec0120-0078-4bfc-a484-c8e25bce75cc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fxcqq" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.760524 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cb155748-60ea-4ba4-8add-144027528478-ovs-socket\") pod \"nmstate-handler-wbmlw\" (UID: \"cb155748-60ea-4ba4-8add-144027528478\") " pod="openshift-nmstate/nmstate-handler-wbmlw" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.760568 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cb155748-60ea-4ba4-8add-144027528478-ovs-socket\") pod \"nmstate-handler-wbmlw\" (UID: \"cb155748-60ea-4ba4-8add-144027528478\") " pod="openshift-nmstate/nmstate-handler-wbmlw" Jan 22 15:37:07 crc 
kubenswrapper[4825]: I0122 15:37:07.760581 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljfbz\" (UniqueName: \"kubernetes.io/projected/cb73c1aa-4e2b-40fd-aebe-21d16e031e60-kube-api-access-ljfbz\") pod \"nmstate-webhook-8474b5b9d8-nncrc\" (UID: \"cb73c1aa-4e2b-40fd-aebe-21d16e031e60\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nncrc" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.760622 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cb155748-60ea-4ba4-8add-144027528478-dbus-socket\") pod \"nmstate-handler-wbmlw\" (UID: \"cb155748-60ea-4ba4-8add-144027528478\") " pod="openshift-nmstate/nmstate-handler-wbmlw" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.760640 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cb7x\" (UniqueName: \"kubernetes.io/projected/61e185b2-1b85-42f6-be2f-7e2d9d698453-kube-api-access-4cb7x\") pod \"nmstate-metrics-54757c584b-2vr9x\" (UID: \"61e185b2-1b85-42f6-be2f-7e2d9d698453\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-2vr9x" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.760656 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/daec0120-0078-4bfc-a484-c8e25bce75cc-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-fxcqq\" (UID: \"daec0120-0078-4bfc-a484-c8e25bce75cc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fxcqq" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.760722 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2srjn\" (UniqueName: \"kubernetes.io/projected/daec0120-0078-4bfc-a484-c8e25bce75cc-kube-api-access-2srjn\") pod \"nmstate-console-plugin-7754f76f8b-fxcqq\" (UID: 
\"daec0120-0078-4bfc-a484-c8e25bce75cc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fxcqq" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.760753 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cb73c1aa-4e2b-40fd-aebe-21d16e031e60-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-nncrc\" (UID: \"cb73c1aa-4e2b-40fd-aebe-21d16e031e60\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nncrc" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.760877 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cb155748-60ea-4ba4-8add-144027528478-nmstate-lock\") pod \"nmstate-handler-wbmlw\" (UID: \"cb155748-60ea-4ba4-8add-144027528478\") " pod="openshift-nmstate/nmstate-handler-wbmlw" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.760900 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djtl4\" (UniqueName: \"kubernetes.io/projected/cb155748-60ea-4ba4-8add-144027528478-kube-api-access-djtl4\") pod \"nmstate-handler-wbmlw\" (UID: \"cb155748-60ea-4ba4-8add-144027528478\") " pod="openshift-nmstate/nmstate-handler-wbmlw" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.760959 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cb155748-60ea-4ba4-8add-144027528478-nmstate-lock\") pod \"nmstate-handler-wbmlw\" (UID: \"cb155748-60ea-4ba4-8add-144027528478\") " pod="openshift-nmstate/nmstate-handler-wbmlw" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.760963 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cb155748-60ea-4ba4-8add-144027528478-dbus-socket\") pod \"nmstate-handler-wbmlw\" (UID: \"cb155748-60ea-4ba4-8add-144027528478\") " 
pod="openshift-nmstate/nmstate-handler-wbmlw" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.766718 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cb73c1aa-4e2b-40fd-aebe-21d16e031e60-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-nncrc\" (UID: \"cb73c1aa-4e2b-40fd-aebe-21d16e031e60\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nncrc" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.787932 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljfbz\" (UniqueName: \"kubernetes.io/projected/cb73c1aa-4e2b-40fd-aebe-21d16e031e60-kube-api-access-ljfbz\") pod \"nmstate-webhook-8474b5b9d8-nncrc\" (UID: \"cb73c1aa-4e2b-40fd-aebe-21d16e031e60\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nncrc" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.788968 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djtl4\" (UniqueName: \"kubernetes.io/projected/cb155748-60ea-4ba4-8add-144027528478-kube-api-access-djtl4\") pod \"nmstate-handler-wbmlw\" (UID: \"cb155748-60ea-4ba4-8add-144027528478\") " pod="openshift-nmstate/nmstate-handler-wbmlw" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.789701 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cb7x\" (UniqueName: \"kubernetes.io/projected/61e185b2-1b85-42f6-be2f-7e2d9d698453-kube-api-access-4cb7x\") pod \"nmstate-metrics-54757c584b-2vr9x\" (UID: \"61e185b2-1b85-42f6-be2f-7e2d9d698453\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-2vr9x" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.824130 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-2vr9x" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.847832 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-wbmlw" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.860560 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nncrc" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.861942 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/daec0120-0078-4bfc-a484-c8e25bce75cc-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-fxcqq\" (UID: \"daec0120-0078-4bfc-a484-c8e25bce75cc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fxcqq" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.862082 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2srjn\" (UniqueName: \"kubernetes.io/projected/daec0120-0078-4bfc-a484-c8e25bce75cc-kube-api-access-2srjn\") pod \"nmstate-console-plugin-7754f76f8b-fxcqq\" (UID: \"daec0120-0078-4bfc-a484-c8e25bce75cc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fxcqq" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.862180 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/daec0120-0078-4bfc-a484-c8e25bce75cc-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-fxcqq\" (UID: \"daec0120-0078-4bfc-a484-c8e25bce75cc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fxcqq" Jan 22 15:37:07 crc kubenswrapper[4825]: E0122 15:37:07.862555 4825 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 22 15:37:07 crc kubenswrapper[4825]: E0122 15:37:07.862628 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daec0120-0078-4bfc-a484-c8e25bce75cc-plugin-serving-cert podName:daec0120-0078-4bfc-a484-c8e25bce75cc nodeName:}" failed. 
No retries permitted until 2026-01-22 15:37:08.362613246 +0000 UTC m=+775.124140156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/daec0120-0078-4bfc-a484-c8e25bce75cc-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-fxcqq" (UID: "daec0120-0078-4bfc-a484-c8e25bce75cc") : secret "plugin-serving-cert" not found Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.863204 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/daec0120-0078-4bfc-a484-c8e25bce75cc-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-fxcqq\" (UID: \"daec0120-0078-4bfc-a484-c8e25bce75cc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fxcqq" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.881968 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2srjn\" (UniqueName: \"kubernetes.io/projected/daec0120-0078-4bfc-a484-c8e25bce75cc-kube-api-access-2srjn\") pod \"nmstate-console-plugin-7754f76f8b-fxcqq\" (UID: \"daec0120-0078-4bfc-a484-c8e25bce75cc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fxcqq" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.894705 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wbmlw" event={"ID":"cb155748-60ea-4ba4-8add-144027528478","Type":"ContainerStarted","Data":"2118e25990bc70f4c4f4e7b70ee7101aaa78752e6d99596581f5cb413210bd3e"} Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.965888 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5c45bdf44d-8g65s"] Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.966795 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:07 crc kubenswrapper[4825]: I0122 15:37:07.983872 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c45bdf44d-8g65s"] Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.127258 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-2vr9x"] Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.169440 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj6gg\" (UniqueName: \"kubernetes.io/projected/00839a47-7566-43b5-9a7b-794f7ae631a2-kube-api-access-vj6gg\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.169505 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00839a47-7566-43b5-9a7b-794f7ae631a2-console-config\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.169522 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00839a47-7566-43b5-9a7b-794f7ae631a2-console-oauth-config\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.169540 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00839a47-7566-43b5-9a7b-794f7ae631a2-oauth-serving-cert\") pod \"console-5c45bdf44d-8g65s\" (UID: 
\"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.169737 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00839a47-7566-43b5-9a7b-794f7ae631a2-trusted-ca-bundle\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.169876 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00839a47-7566-43b5-9a7b-794f7ae631a2-service-ca\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.169929 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00839a47-7566-43b5-9a7b-794f7ae631a2-console-serving-cert\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: W0122 15:37:08.173372 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61e185b2_1b85_42f6_be2f_7e2d9d698453.slice/crio-b6228731052b15d316090b38eb5818009af499dc111ab36421dc21ff3f9f3cdd WatchSource:0}: Error finding container b6228731052b15d316090b38eb5818009af499dc111ab36421dc21ff3f9f3cdd: Status 404 returned error can't find the container with id b6228731052b15d316090b38eb5818009af499dc111ab36421dc21ff3f9f3cdd Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.271357 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00839a47-7566-43b5-9a7b-794f7ae631a2-trusted-ca-bundle\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.273029 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00839a47-7566-43b5-9a7b-794f7ae631a2-trusted-ca-bundle\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.273826 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00839a47-7566-43b5-9a7b-794f7ae631a2-service-ca\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.273221 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00839a47-7566-43b5-9a7b-794f7ae631a2-service-ca\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.274906 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00839a47-7566-43b5-9a7b-794f7ae631a2-console-serving-cert\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.275038 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj6gg\" (UniqueName: 
\"kubernetes.io/projected/00839a47-7566-43b5-9a7b-794f7ae631a2-kube-api-access-vj6gg\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.275183 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00839a47-7566-43b5-9a7b-794f7ae631a2-console-config\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.275266 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00839a47-7566-43b5-9a7b-794f7ae631a2-console-oauth-config\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.275346 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00839a47-7566-43b5-9a7b-794f7ae631a2-oauth-serving-cert\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.277524 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00839a47-7566-43b5-9a7b-794f7ae631a2-oauth-serving-cert\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.277527 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/00839a47-7566-43b5-9a7b-794f7ae631a2-console-config\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.283717 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00839a47-7566-43b5-9a7b-794f7ae631a2-console-oauth-config\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.290811 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00839a47-7566-43b5-9a7b-794f7ae631a2-console-serving-cert\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.297356 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj6gg\" (UniqueName: \"kubernetes.io/projected/00839a47-7566-43b5-9a7b-794f7ae631a2-kube-api-access-vj6gg\") pod \"console-5c45bdf44d-8g65s\" (UID: \"00839a47-7566-43b5-9a7b-794f7ae631a2\") " pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.317329 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5c45bdf44d-8g65s" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.376543 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/daec0120-0078-4bfc-a484-c8e25bce75cc-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-fxcqq\" (UID: \"daec0120-0078-4bfc-a484-c8e25bce75cc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fxcqq" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.379623 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/daec0120-0078-4bfc-a484-c8e25bce75cc-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-fxcqq\" (UID: \"daec0120-0078-4bfc-a484-c8e25bce75cc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fxcqq" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.563374 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-nncrc"] Jan 22 15:37:08 crc kubenswrapper[4825]: W0122 15:37:08.568281 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb73c1aa_4e2b_40fd_aebe_21d16e031e60.slice/crio-805716fdbbe4bc73f8a730b43dd3a4ccce715b4e24e56d2e83ef58d4d2d25496 WatchSource:0}: Error finding container 805716fdbbe4bc73f8a730b43dd3a4ccce715b4e24e56d2e83ef58d4d2d25496: Status 404 returned error can't find the container with id 805716fdbbe4bc73f8a730b43dd3a4ccce715b4e24e56d2e83ef58d4d2d25496 Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.591597 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fxcqq" Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.821760 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c45bdf44d-8g65s"] Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.857800 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-fxcqq"] Jan 22 15:37:08 crc kubenswrapper[4825]: W0122 15:37:08.884180 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddaec0120_0078_4bfc_a484_c8e25bce75cc.slice/crio-6900bf94b4139f84705fcbe70c01147384971300700bfaf1a100b368a2e9db98 WatchSource:0}: Error finding container 6900bf94b4139f84705fcbe70c01147384971300700bfaf1a100b368a2e9db98: Status 404 returned error can't find the container with id 6900bf94b4139f84705fcbe70c01147384971300700bfaf1a100b368a2e9db98 Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.927802 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-2vr9x" event={"ID":"61e185b2-1b85-42f6-be2f-7e2d9d698453","Type":"ContainerStarted","Data":"b6228731052b15d316090b38eb5818009af499dc111ab36421dc21ff3f9f3cdd"} Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.930277 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerStarted","Data":"7c411fc0ec7bfe151046cb879197a0f2e7e0a4bd2d89c00b4f28d59849883ce9"} Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.932621 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c45bdf44d-8g65s" event={"ID":"00839a47-7566-43b5-9a7b-794f7ae631a2","Type":"ContainerStarted","Data":"d6c506216a0c9be1979e91ced962c828d26a1a207852ddbfcfa6cf23f990de6b"} Jan 22 15:37:08 crc 
kubenswrapper[4825]: I0122 15:37:08.933547 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fxcqq" event={"ID":"daec0120-0078-4bfc-a484-c8e25bce75cc","Type":"ContainerStarted","Data":"6900bf94b4139f84705fcbe70c01147384971300700bfaf1a100b368a2e9db98"} Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.934314 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nncrc" event={"ID":"cb73c1aa-4e2b-40fd-aebe-21d16e031e60","Type":"ContainerStarted","Data":"805716fdbbe4bc73f8a730b43dd3a4ccce715b4e24e56d2e83ef58d4d2d25496"} Jan 22 15:37:08 crc kubenswrapper[4825]: I0122 15:37:08.994374 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lnhgp" Jan 22 15:37:09 crc kubenswrapper[4825]: I0122 15:37:09.037777 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lnhgp" Jan 22 15:37:09 crc kubenswrapper[4825]: I0122 15:37:09.232303 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lnhgp"] Jan 22 15:37:09 crc kubenswrapper[4825]: I0122 15:37:09.943918 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c45bdf44d-8g65s" event={"ID":"00839a47-7566-43b5-9a7b-794f7ae631a2","Type":"ContainerStarted","Data":"92c2ff653b47b679437c66d1cbc90ce2506b1b2228f4551a8e5e186a32ba3b81"} Jan 22 15:37:09 crc kubenswrapper[4825]: I0122 15:37:09.971156 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5c45bdf44d-8g65s" podStartSLOduration=2.9711360449999997 podStartE2EDuration="2.971136045s" podCreationTimestamp="2026-01-22 15:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:37:09.964149953 +0000 UTC 
m=+776.725676863" watchObservedRunningTime="2026-01-22 15:37:09.971136045 +0000 UTC m=+776.732662955" Jan 22 15:37:10 crc kubenswrapper[4825]: I0122 15:37:10.951087 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lnhgp" podUID="0f85a671-e159-4cd7-82d2-f31fd4557ee6" containerName="registry-server" containerID="cri-o://f27e190fa95ae2d8f1c6173c3b4f6bad5d4a9571156448ba577c3d0a6f8a9e18" gracePeriod=2 Jan 22 15:37:11 crc kubenswrapper[4825]: I0122 15:37:11.749215 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnhgp" Jan 22 15:37:11 crc kubenswrapper[4825]: I0122 15:37:11.927266 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f85a671-e159-4cd7-82d2-f31fd4557ee6-catalog-content\") pod \"0f85a671-e159-4cd7-82d2-f31fd4557ee6\" (UID: \"0f85a671-e159-4cd7-82d2-f31fd4557ee6\") " Jan 22 15:37:11 crc kubenswrapper[4825]: I0122 15:37:11.927331 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f85a671-e159-4cd7-82d2-f31fd4557ee6-utilities\") pod \"0f85a671-e159-4cd7-82d2-f31fd4557ee6\" (UID: \"0f85a671-e159-4cd7-82d2-f31fd4557ee6\") " Jan 22 15:37:11 crc kubenswrapper[4825]: I0122 15:37:11.927438 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvjtc\" (UniqueName: \"kubernetes.io/projected/0f85a671-e159-4cd7-82d2-f31fd4557ee6-kube-api-access-kvjtc\") pod \"0f85a671-e159-4cd7-82d2-f31fd4557ee6\" (UID: \"0f85a671-e159-4cd7-82d2-f31fd4557ee6\") " Jan 22 15:37:11 crc kubenswrapper[4825]: I0122 15:37:11.928398 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f85a671-e159-4cd7-82d2-f31fd4557ee6-utilities" (OuterVolumeSpecName: "utilities") pod 
"0f85a671-e159-4cd7-82d2-f31fd4557ee6" (UID: "0f85a671-e159-4cd7-82d2-f31fd4557ee6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:37:11 crc kubenswrapper[4825]: I0122 15:37:11.932470 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f85a671-e159-4cd7-82d2-f31fd4557ee6-kube-api-access-kvjtc" (OuterVolumeSpecName: "kube-api-access-kvjtc") pod "0f85a671-e159-4cd7-82d2-f31fd4557ee6" (UID: "0f85a671-e159-4cd7-82d2-f31fd4557ee6"). InnerVolumeSpecName "kube-api-access-kvjtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:37:11 crc kubenswrapper[4825]: I0122 15:37:11.958768 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wbmlw" event={"ID":"cb155748-60ea-4ba4-8add-144027528478","Type":"ContainerStarted","Data":"e7df6fcf3fd52d857afc478a25072f3f570ec350d8416dee17a2aee27b336c3b"} Jan 22 15:37:11 crc kubenswrapper[4825]: I0122 15:37:11.961332 4825 generic.go:334] "Generic (PLEG): container finished" podID="0f85a671-e159-4cd7-82d2-f31fd4557ee6" containerID="f27e190fa95ae2d8f1c6173c3b4f6bad5d4a9571156448ba577c3d0a6f8a9e18" exitCode=0 Jan 22 15:37:11 crc kubenswrapper[4825]: I0122 15:37:11.961360 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnhgp" event={"ID":"0f85a671-e159-4cd7-82d2-f31fd4557ee6","Type":"ContainerDied","Data":"f27e190fa95ae2d8f1c6173c3b4f6bad5d4a9571156448ba577c3d0a6f8a9e18"} Jan 22 15:37:11 crc kubenswrapper[4825]: I0122 15:37:11.961447 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnhgp" event={"ID":"0f85a671-e159-4cd7-82d2-f31fd4557ee6","Type":"ContainerDied","Data":"e03816a09ef5029b4d76d50dadf4376efe620d3f63a8b8b39812306e30c4040f"} Jan 22 15:37:11 crc kubenswrapper[4825]: I0122 15:37:11.961473 4825 scope.go:117] "RemoveContainer" 
containerID="f27e190fa95ae2d8f1c6173c3b4f6bad5d4a9571156448ba577c3d0a6f8a9e18" Jan 22 15:37:11 crc kubenswrapper[4825]: I0122 15:37:11.961734 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnhgp" Jan 22 15:37:11 crc kubenswrapper[4825]: I0122 15:37:11.963623 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nncrc" event={"ID":"cb73c1aa-4e2b-40fd-aebe-21d16e031e60","Type":"ContainerStarted","Data":"4229e6861e43060d97673ac682cef576e5093ce3f5a31fdd54bd7010cce11597"} Jan 22 15:37:11 crc kubenswrapper[4825]: I0122 15:37:11.963802 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nncrc" Jan 22 15:37:11 crc kubenswrapper[4825]: I0122 15:37:11.965440 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-2vr9x" event={"ID":"61e185b2-1b85-42f6-be2f-7e2d9d698453","Type":"ContainerStarted","Data":"b91f0128180c80018aad4332cdad215cf4084d97c2ba7874c5ff2b4da2b887a7"} Jan 22 15:37:11 crc kubenswrapper[4825]: I0122 15:37:11.975841 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-wbmlw" podStartSLOduration=1.3042690860000001 podStartE2EDuration="4.975828554s" podCreationTimestamp="2026-01-22 15:37:07 +0000 UTC" firstStartedPulling="2026-01-22 15:37:07.891212632 +0000 UTC m=+774.652739542" lastFinishedPulling="2026-01-22 15:37:11.5627721 +0000 UTC m=+778.324299010" observedRunningTime="2026-01-22 15:37:11.974077993 +0000 UTC m=+778.735604903" watchObservedRunningTime="2026-01-22 15:37:11.975828554 +0000 UTC m=+778.737355464" Jan 22 15:37:11 crc kubenswrapper[4825]: I0122 15:37:11.982017 4825 scope.go:117] "RemoveContainer" containerID="477f42fe801d2a5dbbb9979d5edee6d0eaede42e12c2dc678a116eebd3a7b230" Jan 22 15:37:11 crc kubenswrapper[4825]: I0122 15:37:11.999826 4825 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nncrc" podStartSLOduration=2.010640095 podStartE2EDuration="4.999807407s" podCreationTimestamp="2026-01-22 15:37:07 +0000 UTC" firstStartedPulling="2026-01-22 15:37:08.570396645 +0000 UTC m=+775.331923555" lastFinishedPulling="2026-01-22 15:37:11.559563957 +0000 UTC m=+778.321090867" observedRunningTime="2026-01-22 15:37:11.998146389 +0000 UTC m=+778.759673299" watchObservedRunningTime="2026-01-22 15:37:11.999807407 +0000 UTC m=+778.761334317" Jan 22 15:37:12 crc kubenswrapper[4825]: I0122 15:37:12.015037 4825 scope.go:117] "RemoveContainer" containerID="2b618d985104bc8ebc2301334c78eff6ddc0d1ae74a3542a65a80a4568e16d95" Jan 22 15:37:12 crc kubenswrapper[4825]: I0122 15:37:12.029569 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvjtc\" (UniqueName: \"kubernetes.io/projected/0f85a671-e159-4cd7-82d2-f31fd4557ee6-kube-api-access-kvjtc\") on node \"crc\" DevicePath \"\"" Jan 22 15:37:12 crc kubenswrapper[4825]: I0122 15:37:12.029615 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f85a671-e159-4cd7-82d2-f31fd4557ee6-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 15:37:12 crc kubenswrapper[4825]: I0122 15:37:12.034212 4825 scope.go:117] "RemoveContainer" containerID="f27e190fa95ae2d8f1c6173c3b4f6bad5d4a9571156448ba577c3d0a6f8a9e18" Jan 22 15:37:12 crc kubenswrapper[4825]: E0122 15:37:12.034616 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f27e190fa95ae2d8f1c6173c3b4f6bad5d4a9571156448ba577c3d0a6f8a9e18\": container with ID starting with f27e190fa95ae2d8f1c6173c3b4f6bad5d4a9571156448ba577c3d0a6f8a9e18 not found: ID does not exist" containerID="f27e190fa95ae2d8f1c6173c3b4f6bad5d4a9571156448ba577c3d0a6f8a9e18" Jan 22 15:37:12 crc kubenswrapper[4825]: I0122 15:37:12.034656 4825 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f27e190fa95ae2d8f1c6173c3b4f6bad5d4a9571156448ba577c3d0a6f8a9e18"} err="failed to get container status \"f27e190fa95ae2d8f1c6173c3b4f6bad5d4a9571156448ba577c3d0a6f8a9e18\": rpc error: code = NotFound desc = could not find container \"f27e190fa95ae2d8f1c6173c3b4f6bad5d4a9571156448ba577c3d0a6f8a9e18\": container with ID starting with f27e190fa95ae2d8f1c6173c3b4f6bad5d4a9571156448ba577c3d0a6f8a9e18 not found: ID does not exist" Jan 22 15:37:12 crc kubenswrapper[4825]: I0122 15:37:12.034686 4825 scope.go:117] "RemoveContainer" containerID="477f42fe801d2a5dbbb9979d5edee6d0eaede42e12c2dc678a116eebd3a7b230" Jan 22 15:37:12 crc kubenswrapper[4825]: E0122 15:37:12.035029 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"477f42fe801d2a5dbbb9979d5edee6d0eaede42e12c2dc678a116eebd3a7b230\": container with ID starting with 477f42fe801d2a5dbbb9979d5edee6d0eaede42e12c2dc678a116eebd3a7b230 not found: ID does not exist" containerID="477f42fe801d2a5dbbb9979d5edee6d0eaede42e12c2dc678a116eebd3a7b230" Jan 22 15:37:12 crc kubenswrapper[4825]: I0122 15:37:12.035091 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"477f42fe801d2a5dbbb9979d5edee6d0eaede42e12c2dc678a116eebd3a7b230"} err="failed to get container status \"477f42fe801d2a5dbbb9979d5edee6d0eaede42e12c2dc678a116eebd3a7b230\": rpc error: code = NotFound desc = could not find container \"477f42fe801d2a5dbbb9979d5edee6d0eaede42e12c2dc678a116eebd3a7b230\": container with ID starting with 477f42fe801d2a5dbbb9979d5edee6d0eaede42e12c2dc678a116eebd3a7b230 not found: ID does not exist" Jan 22 15:37:12 crc kubenswrapper[4825]: I0122 15:37:12.035113 4825 scope.go:117] "RemoveContainer" containerID="2b618d985104bc8ebc2301334c78eff6ddc0d1ae74a3542a65a80a4568e16d95" Jan 22 15:37:12 crc kubenswrapper[4825]: E0122 
15:37:12.036117 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b618d985104bc8ebc2301334c78eff6ddc0d1ae74a3542a65a80a4568e16d95\": container with ID starting with 2b618d985104bc8ebc2301334c78eff6ddc0d1ae74a3542a65a80a4568e16d95 not found: ID does not exist" containerID="2b618d985104bc8ebc2301334c78eff6ddc0d1ae74a3542a65a80a4568e16d95" Jan 22 15:37:12 crc kubenswrapper[4825]: I0122 15:37:12.036143 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b618d985104bc8ebc2301334c78eff6ddc0d1ae74a3542a65a80a4568e16d95"} err="failed to get container status \"2b618d985104bc8ebc2301334c78eff6ddc0d1ae74a3542a65a80a4568e16d95\": rpc error: code = NotFound desc = could not find container \"2b618d985104bc8ebc2301334c78eff6ddc0d1ae74a3542a65a80a4568e16d95\": container with ID starting with 2b618d985104bc8ebc2301334c78eff6ddc0d1ae74a3542a65a80a4568e16d95 not found: ID does not exist" Jan 22 15:37:12 crc kubenswrapper[4825]: I0122 15:37:12.107899 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f85a671-e159-4cd7-82d2-f31fd4557ee6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f85a671-e159-4cd7-82d2-f31fd4557ee6" (UID: "0f85a671-e159-4cd7-82d2-f31fd4557ee6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:37:12 crc kubenswrapper[4825]: I0122 15:37:12.131148 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f85a671-e159-4cd7-82d2-f31fd4557ee6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 15:37:12 crc kubenswrapper[4825]: I0122 15:37:12.311148 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lnhgp"] Jan 22 15:37:12 crc kubenswrapper[4825]: I0122 15:37:12.317729 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lnhgp"] Jan 22 15:37:12 crc kubenswrapper[4825]: I0122 15:37:12.848114 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wbmlw" Jan 22 15:37:13 crc kubenswrapper[4825]: I0122 15:37:13.526308 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f85a671-e159-4cd7-82d2-f31fd4557ee6" path="/var/lib/kubelet/pods/0f85a671-e159-4cd7-82d2-f31fd4557ee6/volumes" Jan 22 15:37:14 crc kubenswrapper[4825]: I0122 15:37:14.991218 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-2vr9x" event={"ID":"61e185b2-1b85-42f6-be2f-7e2d9d698453","Type":"ContainerStarted","Data":"3a7c3d58a33cd17ce728bacd20fb8d40f001774b9dedeff30b1a4594f4d86c65"} Jan 22 15:37:14 crc kubenswrapper[4825]: I0122 15:37:14.995110 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fxcqq" event={"ID":"daec0120-0078-4bfc-a484-c8e25bce75cc","Type":"ContainerStarted","Data":"b6ba062743939905faa7101b4568a4afa3b1a4815073cb9ce2e4549ff639f68a"} Jan 22 15:37:15 crc kubenswrapper[4825]: I0122 15:37:15.027339 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-2vr9x" podStartSLOduration=2.072588923 
podStartE2EDuration="8.027306416s" podCreationTimestamp="2026-01-22 15:37:07 +0000 UTC" firstStartedPulling="2026-01-22 15:37:08.177585436 +0000 UTC m=+774.939112346" lastFinishedPulling="2026-01-22 15:37:14.132302929 +0000 UTC m=+780.893829839" observedRunningTime="2026-01-22 15:37:15.021690004 +0000 UTC m=+781.783216954" watchObservedRunningTime="2026-01-22 15:37:15.027306416 +0000 UTC m=+781.788833366"
Jan 22 15:37:15 crc kubenswrapper[4825]: I0122 15:37:15.052515 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fxcqq" podStartSLOduration=2.8260825240000003 podStartE2EDuration="8.052483264s" podCreationTimestamp="2026-01-22 15:37:07 +0000 UTC" firstStartedPulling="2026-01-22 15:37:08.887158387 +0000 UTC m=+775.648685297" lastFinishedPulling="2026-01-22 15:37:14.113559127 +0000 UTC m=+780.875086037" observedRunningTime="2026-01-22 15:37:15.051544917 +0000 UTC m=+781.813071867" watchObservedRunningTime="2026-01-22 15:37:15.052483264 +0000 UTC m=+781.814010214"
Jan 22 15:37:17 crc kubenswrapper[4825]: I0122 15:37:17.883001 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wbmlw"
Jan 22 15:37:18 crc kubenswrapper[4825]: I0122 15:37:18.317875 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5c45bdf44d-8g65s"
Jan 22 15:37:18 crc kubenswrapper[4825]: I0122 15:37:18.318140 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5c45bdf44d-8g65s"
Jan 22 15:37:18 crc kubenswrapper[4825]: I0122 15:37:18.325028 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5c45bdf44d-8g65s"
Jan 22 15:37:19 crc kubenswrapper[4825]: I0122 15:37:19.048506 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5c45bdf44d-8g65s"
Jan 22 15:37:19 crc kubenswrapper[4825]: I0122 15:37:19.146950 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qvds8"]
Jan 22 15:37:27 crc kubenswrapper[4825]: I0122 15:37:27.868630 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nncrc"
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.136041 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt"]
Jan 22 15:37:44 crc kubenswrapper[4825]: E0122 15:37:44.136756 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f85a671-e159-4cd7-82d2-f31fd4557ee6" containerName="extract-utilities"
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.136767 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f85a671-e159-4cd7-82d2-f31fd4557ee6" containerName="extract-utilities"
Jan 22 15:37:44 crc kubenswrapper[4825]: E0122 15:37:44.136777 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f85a671-e159-4cd7-82d2-f31fd4557ee6" containerName="registry-server"
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.136785 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f85a671-e159-4cd7-82d2-f31fd4557ee6" containerName="registry-server"
Jan 22 15:37:44 crc kubenswrapper[4825]: E0122 15:37:44.136809 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f85a671-e159-4cd7-82d2-f31fd4557ee6" containerName="extract-content"
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.136818 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f85a671-e159-4cd7-82d2-f31fd4557ee6" containerName="extract-content"
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.136948 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f85a671-e159-4cd7-82d2-f31fd4557ee6" containerName="registry-server"
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.138060 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt"
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.139853 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.149252 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt"]
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.178144 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20a233ad-0f91-4e20-806f-84cdef936bc8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt\" (UID: \"20a233ad-0f91-4e20-806f-84cdef936bc8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt"
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.178184 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcvkn\" (UniqueName: \"kubernetes.io/projected/20a233ad-0f91-4e20-806f-84cdef936bc8-kube-api-access-zcvkn\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt\" (UID: \"20a233ad-0f91-4e20-806f-84cdef936bc8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt"
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.178211 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20a233ad-0f91-4e20-806f-84cdef936bc8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt\" (UID: \"20a233ad-0f91-4e20-806f-84cdef936bc8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt"
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.186800 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-qvds8" podUID="81d43c37-4152-47d0-be95-a390693902e9" containerName="console" containerID="cri-o://be6d79d81a50f8b29a2da1197b70e3f4e89492322f161e08524482d5a188f803" gracePeriod=15
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.280118 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20a233ad-0f91-4e20-806f-84cdef936bc8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt\" (UID: \"20a233ad-0f91-4e20-806f-84cdef936bc8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt"
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.280492 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcvkn\" (UniqueName: \"kubernetes.io/projected/20a233ad-0f91-4e20-806f-84cdef936bc8-kube-api-access-zcvkn\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt\" (UID: \"20a233ad-0f91-4e20-806f-84cdef936bc8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt"
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.280539 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20a233ad-0f91-4e20-806f-84cdef936bc8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt\" (UID: \"20a233ad-0f91-4e20-806f-84cdef936bc8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt"
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.281149 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20a233ad-0f91-4e20-806f-84cdef936bc8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt\" (UID: \"20a233ad-0f91-4e20-806f-84cdef936bc8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt"
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.281245 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20a233ad-0f91-4e20-806f-84cdef936bc8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt\" (UID: \"20a233ad-0f91-4e20-806f-84cdef936bc8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt"
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.302879 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcvkn\" (UniqueName: \"kubernetes.io/projected/20a233ad-0f91-4e20-806f-84cdef936bc8-kube-api-access-zcvkn\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt\" (UID: \"20a233ad-0f91-4e20-806f-84cdef936bc8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt"
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.457557 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt"
Jan 22 15:37:44 crc kubenswrapper[4825]: I0122 15:37:44.888184 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt"]
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.255265 4825 generic.go:334] "Generic (PLEG): container finished" podID="20a233ad-0f91-4e20-806f-84cdef936bc8" containerID="cb9dd834ce1b94db7ae97b7a46cf346eb3a1d2fa5efa2d215f235edcad76a6ae" exitCode=0
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.255323 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt" event={"ID":"20a233ad-0f91-4e20-806f-84cdef936bc8","Type":"ContainerDied","Data":"cb9dd834ce1b94db7ae97b7a46cf346eb3a1d2fa5efa2d215f235edcad76a6ae"}
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.255389 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt" event={"ID":"20a233ad-0f91-4e20-806f-84cdef936bc8","Type":"ContainerStarted","Data":"8810ad8f88d4da4521569f5b7638d8b62166b7b3cf98ea6a31bfabc6a206094d"}
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.259240 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qvds8_81d43c37-4152-47d0-be95-a390693902e9/console/0.log"
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.259331 4825 generic.go:334] "Generic (PLEG): container finished" podID="81d43c37-4152-47d0-be95-a390693902e9" containerID="be6d79d81a50f8b29a2da1197b70e3f4e89492322f161e08524482d5a188f803" exitCode=2
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.259372 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qvds8" event={"ID":"81d43c37-4152-47d0-be95-a390693902e9","Type":"ContainerDied","Data":"be6d79d81a50f8b29a2da1197b70e3f4e89492322f161e08524482d5a188f803"}
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.699032 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qvds8_81d43c37-4152-47d0-be95-a390693902e9/console/0.log"
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.699471 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qvds8"
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.803092 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81d43c37-4152-47d0-be95-a390693902e9-console-oauth-config\") pod \"81d43c37-4152-47d0-be95-a390693902e9\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") "
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.803161 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-trusted-ca-bundle\") pod \"81d43c37-4152-47d0-be95-a390693902e9\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") "
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.803200 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-oauth-serving-cert\") pod \"81d43c37-4152-47d0-be95-a390693902e9\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") "
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.803305 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8npv4\" (UniqueName: \"kubernetes.io/projected/81d43c37-4152-47d0-be95-a390693902e9-kube-api-access-8npv4\") pod \"81d43c37-4152-47d0-be95-a390693902e9\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") "
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.803351 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81d43c37-4152-47d0-be95-a390693902e9-console-serving-cert\") pod \"81d43c37-4152-47d0-be95-a390693902e9\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") "
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.803387 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-service-ca\") pod \"81d43c37-4152-47d0-be95-a390693902e9\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") "
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.803418 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-console-config\") pod \"81d43c37-4152-47d0-be95-a390693902e9\" (UID: \"81d43c37-4152-47d0-be95-a390693902e9\") "
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.804262 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "81d43c37-4152-47d0-be95-a390693902e9" (UID: "81d43c37-4152-47d0-be95-a390693902e9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.804287 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-console-config" (OuterVolumeSpecName: "console-config") pod "81d43c37-4152-47d0-be95-a390693902e9" (UID: "81d43c37-4152-47d0-be95-a390693902e9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.804645 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-service-ca" (OuterVolumeSpecName: "service-ca") pod "81d43c37-4152-47d0-be95-a390693902e9" (UID: "81d43c37-4152-47d0-be95-a390693902e9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.805071 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "81d43c37-4152-47d0-be95-a390693902e9" (UID: "81d43c37-4152-47d0-be95-a390693902e9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.809114 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81d43c37-4152-47d0-be95-a390693902e9-kube-api-access-8npv4" (OuterVolumeSpecName: "kube-api-access-8npv4") pod "81d43c37-4152-47d0-be95-a390693902e9" (UID: "81d43c37-4152-47d0-be95-a390693902e9"). InnerVolumeSpecName "kube-api-access-8npv4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.809393 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81d43c37-4152-47d0-be95-a390693902e9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "81d43c37-4152-47d0-be95-a390693902e9" (UID: "81d43c37-4152-47d0-be95-a390693902e9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.810447 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81d43c37-4152-47d0-be95-a390693902e9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "81d43c37-4152-47d0-be95-a390693902e9" (UID: "81d43c37-4152-47d0-be95-a390693902e9"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.904722 4825 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81d43c37-4152-47d0-be95-a390693902e9-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.904760 4825 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-service-ca\") on node \"crc\" DevicePath \"\""
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.904772 4825 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-console-config\") on node \"crc\" DevicePath \"\""
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.904783 4825 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81d43c37-4152-47d0-be95-a390693902e9-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.904796 4825 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.904810 4825 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81d43c37-4152-47d0-be95-a390693902e9-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 15:37:45 crc kubenswrapper[4825]: I0122 15:37:45.904822 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8npv4\" (UniqueName: \"kubernetes.io/projected/81d43c37-4152-47d0-be95-a390693902e9-kube-api-access-8npv4\") on node \"crc\" DevicePath \"\""
Jan 22 15:37:46 crc kubenswrapper[4825]: I0122 15:37:46.268519 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qvds8_81d43c37-4152-47d0-be95-a390693902e9/console/0.log"
Jan 22 15:37:46 crc kubenswrapper[4825]: I0122 15:37:46.268576 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qvds8" event={"ID":"81d43c37-4152-47d0-be95-a390693902e9","Type":"ContainerDied","Data":"6e1c36df9a34e9c31e62abd8c23652ec080349de7d0f90c3375bd5127ceb521f"}
Jan 22 15:37:46 crc kubenswrapper[4825]: I0122 15:37:46.268610 4825 scope.go:117] "RemoveContainer" containerID="be6d79d81a50f8b29a2da1197b70e3f4e89492322f161e08524482d5a188f803"
Jan 22 15:37:46 crc kubenswrapper[4825]: I0122 15:37:46.268671 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qvds8"
Jan 22 15:37:46 crc kubenswrapper[4825]: I0122 15:37:46.327121 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qvds8"]
Jan 22 15:37:46 crc kubenswrapper[4825]: I0122 15:37:46.332204 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-qvds8"]
Jan 22 15:37:47 crc kubenswrapper[4825]: I0122 15:37:47.525572 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81d43c37-4152-47d0-be95-a390693902e9" path="/var/lib/kubelet/pods/81d43c37-4152-47d0-be95-a390693902e9/volumes"
Jan 22 15:37:48 crc kubenswrapper[4825]: I0122 15:37:48.288511 4825 generic.go:334] "Generic (PLEG): container finished" podID="20a233ad-0f91-4e20-806f-84cdef936bc8" containerID="39c5a8c9671a4605c92ba54f60833d4df76419c765f41962ad5c64cf0db4a50f" exitCode=0
Jan 22 15:37:48 crc kubenswrapper[4825]: I0122 15:37:48.288625 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt" event={"ID":"20a233ad-0f91-4e20-806f-84cdef936bc8","Type":"ContainerDied","Data":"39c5a8c9671a4605c92ba54f60833d4df76419c765f41962ad5c64cf0db4a50f"}
Jan 22 15:37:49 crc kubenswrapper[4825]: I0122 15:37:49.297795 4825 generic.go:334] "Generic (PLEG): container finished" podID="20a233ad-0f91-4e20-806f-84cdef936bc8" containerID="02290fb6237015299999a4bda2c619726eb894bfeb2437d9b90212ba12a323c8" exitCode=0
Jan 22 15:37:49 crc kubenswrapper[4825]: I0122 15:37:49.298137 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt" event={"ID":"20a233ad-0f91-4e20-806f-84cdef936bc8","Type":"ContainerDied","Data":"02290fb6237015299999a4bda2c619726eb894bfeb2437d9b90212ba12a323c8"}
Jan 22 15:37:50 crc kubenswrapper[4825]: I0122 15:37:50.750533 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt"
Jan 22 15:37:50 crc kubenswrapper[4825]: I0122 15:37:50.812257 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20a233ad-0f91-4e20-806f-84cdef936bc8-bundle\") pod \"20a233ad-0f91-4e20-806f-84cdef936bc8\" (UID: \"20a233ad-0f91-4e20-806f-84cdef936bc8\") "
Jan 22 15:37:50 crc kubenswrapper[4825]: I0122 15:37:50.812391 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20a233ad-0f91-4e20-806f-84cdef936bc8-util\") pod \"20a233ad-0f91-4e20-806f-84cdef936bc8\" (UID: \"20a233ad-0f91-4e20-806f-84cdef936bc8\") "
Jan 22 15:37:50 crc kubenswrapper[4825]: I0122 15:37:50.812467 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcvkn\" (UniqueName: \"kubernetes.io/projected/20a233ad-0f91-4e20-806f-84cdef936bc8-kube-api-access-zcvkn\") pod \"20a233ad-0f91-4e20-806f-84cdef936bc8\" (UID: \"20a233ad-0f91-4e20-806f-84cdef936bc8\") "
Jan 22 15:37:50 crc kubenswrapper[4825]: I0122 15:37:50.814082 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20a233ad-0f91-4e20-806f-84cdef936bc8-bundle" (OuterVolumeSpecName: "bundle") pod "20a233ad-0f91-4e20-806f-84cdef936bc8" (UID: "20a233ad-0f91-4e20-806f-84cdef936bc8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:37:50 crc kubenswrapper[4825]: I0122 15:37:50.824312 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20a233ad-0f91-4e20-806f-84cdef936bc8-util" (OuterVolumeSpecName: "util") pod "20a233ad-0f91-4e20-806f-84cdef936bc8" (UID: "20a233ad-0f91-4e20-806f-84cdef936bc8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:37:50 crc kubenswrapper[4825]: I0122 15:37:50.856336 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20a233ad-0f91-4e20-806f-84cdef936bc8-kube-api-access-zcvkn" (OuterVolumeSpecName: "kube-api-access-zcvkn") pod "20a233ad-0f91-4e20-806f-84cdef936bc8" (UID: "20a233ad-0f91-4e20-806f-84cdef936bc8"). InnerVolumeSpecName "kube-api-access-zcvkn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:37:50 crc kubenswrapper[4825]: I0122 15:37:50.914402 4825 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20a233ad-0f91-4e20-806f-84cdef936bc8-util\") on node \"crc\" DevicePath \"\""
Jan 22 15:37:50 crc kubenswrapper[4825]: I0122 15:37:50.914454 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcvkn\" (UniqueName: \"kubernetes.io/projected/20a233ad-0f91-4e20-806f-84cdef936bc8-kube-api-access-zcvkn\") on node \"crc\" DevicePath \"\""
Jan 22 15:37:50 crc kubenswrapper[4825]: I0122 15:37:50.914475 4825 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20a233ad-0f91-4e20-806f-84cdef936bc8-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 15:37:51 crc kubenswrapper[4825]: I0122 15:37:51.318295 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt" event={"ID":"20a233ad-0f91-4e20-806f-84cdef936bc8","Type":"ContainerDied","Data":"8810ad8f88d4da4521569f5b7638d8b62166b7b3cf98ea6a31bfabc6a206094d"}
Jan 22 15:37:51 crc kubenswrapper[4825]: I0122 15:37:51.318362 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8810ad8f88d4da4521569f5b7638d8b62166b7b3cf98ea6a31bfabc6a206094d"
Jan 22 15:37:51 crc kubenswrapper[4825]: I0122 15:37:51.318605 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.581583 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-59f887c8c5-jc6l7"]
Jan 22 15:38:01 crc kubenswrapper[4825]: E0122 15:38:01.582459 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a233ad-0f91-4e20-806f-84cdef936bc8" containerName="extract"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.582474 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a233ad-0f91-4e20-806f-84cdef936bc8" containerName="extract"
Jan 22 15:38:01 crc kubenswrapper[4825]: E0122 15:38:01.582490 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a233ad-0f91-4e20-806f-84cdef936bc8" containerName="pull"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.582497 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a233ad-0f91-4e20-806f-84cdef936bc8" containerName="pull"
Jan 22 15:38:01 crc kubenswrapper[4825]: E0122 15:38:01.582511 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d43c37-4152-47d0-be95-a390693902e9" containerName="console"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.582518 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d43c37-4152-47d0-be95-a390693902e9" containerName="console"
Jan 22 15:38:01 crc kubenswrapper[4825]: E0122 15:38:01.582531 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a233ad-0f91-4e20-806f-84cdef936bc8" containerName="util"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.582540 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a233ad-0f91-4e20-806f-84cdef936bc8" containerName="util"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.582702 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="20a233ad-0f91-4e20-806f-84cdef936bc8" containerName="extract"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.582720 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="81d43c37-4152-47d0-be95-a390693902e9" containerName="console"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.583293 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-59f887c8c5-jc6l7"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.587075 4825 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.587256 4825 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-nrp47"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.587351 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.587421 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.588302 4825 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.595901 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-59f887c8c5-jc6l7"]
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.626086 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxnm9\" (UniqueName: \"kubernetes.io/projected/6f10e107-2124-422f-9201-d516620b0919-kube-api-access-vxnm9\") pod \"metallb-operator-controller-manager-59f887c8c5-jc6l7\" (UID: \"6f10e107-2124-422f-9201-d516620b0919\") " pod="metallb-system/metallb-operator-controller-manager-59f887c8c5-jc6l7"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.626224 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6f10e107-2124-422f-9201-d516620b0919-apiservice-cert\") pod \"metallb-operator-controller-manager-59f887c8c5-jc6l7\" (UID: \"6f10e107-2124-422f-9201-d516620b0919\") " pod="metallb-system/metallb-operator-controller-manager-59f887c8c5-jc6l7"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.626351 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6f10e107-2124-422f-9201-d516620b0919-webhook-cert\") pod \"metallb-operator-controller-manager-59f887c8c5-jc6l7\" (UID: \"6f10e107-2124-422f-9201-d516620b0919\") " pod="metallb-system/metallb-operator-controller-manager-59f887c8c5-jc6l7"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.728199 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6f10e107-2124-422f-9201-d516620b0919-apiservice-cert\") pod \"metallb-operator-controller-manager-59f887c8c5-jc6l7\" (UID: \"6f10e107-2124-422f-9201-d516620b0919\") " pod="metallb-system/metallb-operator-controller-manager-59f887c8c5-jc6l7"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.728264 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6f10e107-2124-422f-9201-d516620b0919-webhook-cert\") pod \"metallb-operator-controller-manager-59f887c8c5-jc6l7\" (UID: \"6f10e107-2124-422f-9201-d516620b0919\") " pod="metallb-system/metallb-operator-controller-manager-59f887c8c5-jc6l7"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.728299 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxnm9\" (UniqueName: \"kubernetes.io/projected/6f10e107-2124-422f-9201-d516620b0919-kube-api-access-vxnm9\") pod \"metallb-operator-controller-manager-59f887c8c5-jc6l7\" (UID: \"6f10e107-2124-422f-9201-d516620b0919\") " pod="metallb-system/metallb-operator-controller-manager-59f887c8c5-jc6l7"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.733867 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6f10e107-2124-422f-9201-d516620b0919-apiservice-cert\") pod \"metallb-operator-controller-manager-59f887c8c5-jc6l7\" (UID: \"6f10e107-2124-422f-9201-d516620b0919\") " pod="metallb-system/metallb-operator-controller-manager-59f887c8c5-jc6l7"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.733947 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6f10e107-2124-422f-9201-d516620b0919-webhook-cert\") pod \"metallb-operator-controller-manager-59f887c8c5-jc6l7\" (UID: \"6f10e107-2124-422f-9201-d516620b0919\") " pod="metallb-system/metallb-operator-controller-manager-59f887c8c5-jc6l7"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.748449 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxnm9\" (UniqueName: \"kubernetes.io/projected/6f10e107-2124-422f-9201-d516620b0919-kube-api-access-vxnm9\") pod \"metallb-operator-controller-manager-59f887c8c5-jc6l7\" (UID: \"6f10e107-2124-422f-9201-d516620b0919\") " pod="metallb-system/metallb-operator-controller-manager-59f887c8c5-jc6l7"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.830022 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-f8d94c798-ms78l"]
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.831264 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-f8d94c798-ms78l"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.832905 4825 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.833189 4825 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.833517 4825 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-ngwwj"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.847456 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-f8d94c798-ms78l"]
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.898892 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-59f887c8c5-jc6l7"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.930297 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwrbd\" (UniqueName: \"kubernetes.io/projected/17f8943a-3372-4216-aa96-9e61c5e8110c-kube-api-access-jwrbd\") pod \"metallb-operator-webhook-server-f8d94c798-ms78l\" (UID: \"17f8943a-3372-4216-aa96-9e61c5e8110c\") " pod="metallb-system/metallb-operator-webhook-server-f8d94c798-ms78l"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.930401 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17f8943a-3372-4216-aa96-9e61c5e8110c-webhook-cert\") pod \"metallb-operator-webhook-server-f8d94c798-ms78l\" (UID: \"17f8943a-3372-4216-aa96-9e61c5e8110c\") " pod="metallb-system/metallb-operator-webhook-server-f8d94c798-ms78l"
Jan 22 15:38:01 crc kubenswrapper[4825]: I0122 15:38:01.930568 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17f8943a-3372-4216-aa96-9e61c5e8110c-apiservice-cert\") pod \"metallb-operator-webhook-server-f8d94c798-ms78l\" (UID: \"17f8943a-3372-4216-aa96-9e61c5e8110c\") " pod="metallb-system/metallb-operator-webhook-server-f8d94c798-ms78l"
Jan 22 15:38:02 crc kubenswrapper[4825]: I0122 15:38:02.060176 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17f8943a-3372-4216-aa96-9e61c5e8110c-webhook-cert\") pod \"metallb-operator-webhook-server-f8d94c798-ms78l\" (UID: \"17f8943a-3372-4216-aa96-9e61c5e8110c\") " pod="metallb-system/metallb-operator-webhook-server-f8d94c798-ms78l"
Jan 22 15:38:02 crc kubenswrapper[4825]: I0122 15:38:02.060255 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17f8943a-3372-4216-aa96-9e61c5e8110c-apiservice-cert\") pod \"metallb-operator-webhook-server-f8d94c798-ms78l\" (UID: \"17f8943a-3372-4216-aa96-9e61c5e8110c\") " pod="metallb-system/metallb-operator-webhook-server-f8d94c798-ms78l"
Jan 22 15:38:02 crc kubenswrapper[4825]: I0122 15:38:02.060317 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwrbd\" (UniqueName: \"kubernetes.io/projected/17f8943a-3372-4216-aa96-9e61c5e8110c-kube-api-access-jwrbd\") pod \"metallb-operator-webhook-server-f8d94c798-ms78l\" (UID: \"17f8943a-3372-4216-aa96-9e61c5e8110c\") " pod="metallb-system/metallb-operator-webhook-server-f8d94c798-ms78l"
Jan 22 15:38:02 crc kubenswrapper[4825]: I0122 15:38:02.065725 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17f8943a-3372-4216-aa96-9e61c5e8110c-apiservice-cert\") pod \"metallb-operator-webhook-server-f8d94c798-ms78l\" (UID: \"17f8943a-3372-4216-aa96-9e61c5e8110c\") " pod="metallb-system/metallb-operator-webhook-server-f8d94c798-ms78l"
Jan 22 15:38:02 crc kubenswrapper[4825]: I0122 15:38:02.069997 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17f8943a-3372-4216-aa96-9e61c5e8110c-webhook-cert\") pod \"metallb-operator-webhook-server-f8d94c798-ms78l\" (UID: \"17f8943a-3372-4216-aa96-9e61c5e8110c\") " pod="metallb-system/metallb-operator-webhook-server-f8d94c798-ms78l"
Jan 22 15:38:02 crc kubenswrapper[4825]: I0122 15:38:02.086157 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwrbd\" (UniqueName: \"kubernetes.io/projected/17f8943a-3372-4216-aa96-9e61c5e8110c-kube-api-access-jwrbd\") pod \"metallb-operator-webhook-server-f8d94c798-ms78l\" (UID: \"17f8943a-3372-4216-aa96-9e61c5e8110c\") " pod="metallb-system/metallb-operator-webhook-server-f8d94c798-ms78l"
Jan 22 15:38:02 crc kubenswrapper[4825]: I0122 15:38:02.146547 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-f8d94c798-ms78l"
Jan 22 15:38:02 crc kubenswrapper[4825]: I0122 15:38:02.407294 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-59f887c8c5-jc6l7"]
Jan 22 15:38:02 crc kubenswrapper[4825]: W0122 15:38:02.409964 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f10e107_2124_422f_9201_d516620b0919.slice/crio-1ead301dd354463b46a60af0255de50ebb0f18aef333eef7202a764d4b00b124 WatchSource:0}: Error finding container 1ead301dd354463b46a60af0255de50ebb0f18aef333eef7202a764d4b00b124: Status 404 returned error can't find the container with id 1ead301dd354463b46a60af0255de50ebb0f18aef333eef7202a764d4b00b124
Jan 22 15:38:02 crc kubenswrapper[4825]: I0122 15:38:02.609179 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-f8d94c798-ms78l"]
Jan 22 15:38:02 crc kubenswrapper[4825]: W0122 15:38:02.624278 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17f8943a_3372_4216_aa96_9e61c5e8110c.slice/crio-5cd9fcd957fcd87cbfc481c325cf8b34abcfab701cc334fa8047bf85803a3f72 WatchSource:0}: Error finding container 5cd9fcd957fcd87cbfc481c325cf8b34abcfab701cc334fa8047bf85803a3f72: Status 404 returned error can't find the container with id 5cd9fcd957fcd87cbfc481c325cf8b34abcfab701cc334fa8047bf85803a3f72
Jan 22 15:38:03 crc kubenswrapper[4825]: I0122 15:38:03.414627 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-59f887c8c5-jc6l7" event={"ID":"6f10e107-2124-422f-9201-d516620b0919","Type":"ContainerStarted","Data":"1ead301dd354463b46a60af0255de50ebb0f18aef333eef7202a764d4b00b124"}
Jan 22 15:38:03 crc kubenswrapper[4825]: I0122 15:38:03.416667 4825 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-f8d94c798-ms78l" event={"ID":"17f8943a-3372-4216-aa96-9e61c5e8110c","Type":"ContainerStarted","Data":"5cd9fcd957fcd87cbfc481c325cf8b34abcfab701cc334fa8047bf85803a3f72"} Jan 22 15:38:08 crc kubenswrapper[4825]: I0122 15:38:08.466145 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-59f887c8c5-jc6l7" event={"ID":"6f10e107-2124-422f-9201-d516620b0919","Type":"ContainerStarted","Data":"90fd6c204089bdec04ee5d00f831f6499fea91e69a3c14f8b192c5c8a2f85fa2"} Jan 22 15:38:08 crc kubenswrapper[4825]: I0122 15:38:08.466554 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-59f887c8c5-jc6l7" Jan 22 15:38:08 crc kubenswrapper[4825]: I0122 15:38:08.468115 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-f8d94c798-ms78l" event={"ID":"17f8943a-3372-4216-aa96-9e61c5e8110c","Type":"ContainerStarted","Data":"233f7a1991ba18a3569659ccd531abc9c3cfe5f594ed2390e9695fcf9ae1e1c4"} Jan 22 15:38:08 crc kubenswrapper[4825]: I0122 15:38:08.468256 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-f8d94c798-ms78l" Jan 22 15:38:08 crc kubenswrapper[4825]: I0122 15:38:08.494862 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-59f887c8c5-jc6l7" podStartSLOduration=2.1256493340000002 podStartE2EDuration="7.494842288s" podCreationTimestamp="2026-01-22 15:38:01 +0000 UTC" firstStartedPulling="2026-01-22 15:38:02.412408358 +0000 UTC m=+829.173935268" lastFinishedPulling="2026-01-22 15:38:07.781601312 +0000 UTC m=+834.543128222" observedRunningTime="2026-01-22 15:38:08.487490349 +0000 UTC m=+835.249017259" watchObservedRunningTime="2026-01-22 15:38:08.494842288 +0000 UTC m=+835.256369188" Jan 22 
15:38:08 crc kubenswrapper[4825]: I0122 15:38:08.541473 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-f8d94c798-ms78l" podStartSLOduration=2.352450928 podStartE2EDuration="7.541447144s" podCreationTimestamp="2026-01-22 15:38:01 +0000 UTC" firstStartedPulling="2026-01-22 15:38:02.629106935 +0000 UTC m=+829.390633845" lastFinishedPulling="2026-01-22 15:38:07.818103151 +0000 UTC m=+834.579630061" observedRunningTime="2026-01-22 15:38:08.525970944 +0000 UTC m=+835.287497854" watchObservedRunningTime="2026-01-22 15:38:08.541447144 +0000 UTC m=+835.302974054" Jan 22 15:38:22 crc kubenswrapper[4825]: I0122 15:38:22.153263 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-f8d94c798-ms78l" Jan 22 15:38:41 crc kubenswrapper[4825]: I0122 15:38:41.902719 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-59f887c8c5-jc6l7" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.723802 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-kt4gs"] Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.727519 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.729656 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-zk72c"] Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.730468 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zk72c" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.731073 4825 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.731307 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.731394 4825 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-np7vp" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.731651 4825 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.733608 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0773281d-f402-46e0-ae19-32a82824046b-metrics-certs\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.733661 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl2t8\" (UniqueName: \"kubernetes.io/projected/148811c4-5c9f-4c58-86f6-df32772b3fb9-kube-api-access-cl2t8\") pod \"frr-k8s-webhook-server-7df86c4f6c-zk72c\" (UID: \"148811c4-5c9f-4c58-86f6-df32772b3fb9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zk72c" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.733722 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0773281d-f402-46e0-ae19-32a82824046b-frr-sockets\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 
crc kubenswrapper[4825]: I0122 15:38:42.733741 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0773281d-f402-46e0-ae19-32a82824046b-reloader\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.733767 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/148811c4-5c9f-4c58-86f6-df32772b3fb9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zk72c\" (UID: \"148811c4-5c9f-4c58-86f6-df32772b3fb9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zk72c" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.733790 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0773281d-f402-46e0-ae19-32a82824046b-metrics\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.733807 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnn26\" (UniqueName: \"kubernetes.io/projected/0773281d-f402-46e0-ae19-32a82824046b-kube-api-access-vnn26\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.733844 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0773281d-f402-46e0-ae19-32a82824046b-frr-conf\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.733865 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0773281d-f402-46e0-ae19-32a82824046b-frr-startup\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.742190 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-zk72c"] Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.830236 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-lk5df"] Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.831484 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-lk5df" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.834999 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0773281d-f402-46e0-ae19-32a82824046b-frr-sockets\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.835048 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0773281d-f402-46e0-ae19-32a82824046b-reloader\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.835073 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/148811c4-5c9f-4c58-86f6-df32772b3fb9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zk72c\" (UID: \"148811c4-5c9f-4c58-86f6-df32772b3fb9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zk72c" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.835106 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0773281d-f402-46e0-ae19-32a82824046b-metrics\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.835129 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnn26\" (UniqueName: \"kubernetes.io/projected/0773281d-f402-46e0-ae19-32a82824046b-kube-api-access-vnn26\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.835165 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0773281d-f402-46e0-ae19-32a82824046b-frr-conf\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.835192 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0773281d-f402-46e0-ae19-32a82824046b-frr-startup\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.835221 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0773281d-f402-46e0-ae19-32a82824046b-metrics-certs\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.835258 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl2t8\" (UniqueName: \"kubernetes.io/projected/148811c4-5c9f-4c58-86f6-df32772b3fb9-kube-api-access-cl2t8\") pod 
\"frr-k8s-webhook-server-7df86c4f6c-zk72c\" (UID: \"148811c4-5c9f-4c58-86f6-df32772b3fb9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zk72c" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.836057 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0773281d-f402-46e0-ae19-32a82824046b-frr-sockets\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.836338 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0773281d-f402-46e0-ae19-32a82824046b-reloader\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: E0122 15:38:42.836424 4825 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 22 15:38:42 crc kubenswrapper[4825]: E0122 15:38:42.836474 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/148811c4-5c9f-4c58-86f6-df32772b3fb9-cert podName:148811c4-5c9f-4c58-86f6-df32772b3fb9 nodeName:}" failed. No retries permitted until 2026-01-22 15:38:43.336457124 +0000 UTC m=+870.097984034 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/148811c4-5c9f-4c58-86f6-df32772b3fb9-cert") pod "frr-k8s-webhook-server-7df86c4f6c-zk72c" (UID: "148811c4-5c9f-4c58-86f6-df32772b3fb9") : secret "frr-k8s-webhook-server-cert" not found Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.836894 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0773281d-f402-46e0-ae19-32a82824046b-metrics\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.837357 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0773281d-f402-46e0-ae19-32a82824046b-frr-conf\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.838397 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0773281d-f402-46e0-ae19-32a82824046b-frr-startup\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.839599 4825 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.839640 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.839670 4825 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.840054 4825 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-6gtcs" Jan 22 
15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.848521 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-t622w"] Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.849594 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0773281d-f402-46e0-ae19-32a82824046b-metrics-certs\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.851600 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-t622w" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.862686 4825 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.863520 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnn26\" (UniqueName: \"kubernetes.io/projected/0773281d-f402-46e0-ae19-32a82824046b-kube-api-access-vnn26\") pod \"frr-k8s-kt4gs\" (UID: \"0773281d-f402-46e0-ae19-32a82824046b\") " pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.876092 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-t622w"] Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.878129 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl2t8\" (UniqueName: \"kubernetes.io/projected/148811c4-5c9f-4c58-86f6-df32772b3fb9-kube-api-access-cl2t8\") pod \"frr-k8s-webhook-server-7df86c4f6c-zk72c\" (UID: \"148811c4-5c9f-4c58-86f6-df32772b3fb9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zk72c" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.936492 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sfjxt\" (UniqueName: \"kubernetes.io/projected/6b567ea4-6df9-4c62-8caf-c8bb77aae0b7-kube-api-access-sfjxt\") pod \"speaker-lk5df\" (UID: \"6b567ea4-6df9-4c62-8caf-c8bb77aae0b7\") " pod="metallb-system/speaker-lk5df" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.936560 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6b567ea4-6df9-4c62-8caf-c8bb77aae0b7-metallb-excludel2\") pod \"speaker-lk5df\" (UID: \"6b567ea4-6df9-4c62-8caf-c8bb77aae0b7\") " pod="metallb-system/speaker-lk5df" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.936579 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6b567ea4-6df9-4c62-8caf-c8bb77aae0b7-memberlist\") pod \"speaker-lk5df\" (UID: \"6b567ea4-6df9-4c62-8caf-c8bb77aae0b7\") " pod="metallb-system/speaker-lk5df" Jan 22 15:38:42 crc kubenswrapper[4825]: I0122 15:38:42.936701 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b567ea4-6df9-4c62-8caf-c8bb77aae0b7-metrics-certs\") pod \"speaker-lk5df\" (UID: \"6b567ea4-6df9-4c62-8caf-c8bb77aae0b7\") " pod="metallb-system/speaker-lk5df" Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.038131 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7bed4152-970e-428e-a8bd-21fe165bde92-cert\") pod \"controller-6968d8fdc4-t622w\" (UID: \"7bed4152-970e-428e-a8bd-21fe165bde92\") " pod="metallb-system/controller-6968d8fdc4-t622w" Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.038191 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxv5c\" (UniqueName: 
\"kubernetes.io/projected/7bed4152-970e-428e-a8bd-21fe165bde92-kube-api-access-wxv5c\") pod \"controller-6968d8fdc4-t622w\" (UID: \"7bed4152-970e-428e-a8bd-21fe165bde92\") " pod="metallb-system/controller-6968d8fdc4-t622w" Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.038280 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfjxt\" (UniqueName: \"kubernetes.io/projected/6b567ea4-6df9-4c62-8caf-c8bb77aae0b7-kube-api-access-sfjxt\") pod \"speaker-lk5df\" (UID: \"6b567ea4-6df9-4c62-8caf-c8bb77aae0b7\") " pod="metallb-system/speaker-lk5df" Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.038332 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6b567ea4-6df9-4c62-8caf-c8bb77aae0b7-metallb-excludel2\") pod \"speaker-lk5df\" (UID: \"6b567ea4-6df9-4c62-8caf-c8bb77aae0b7\") " pod="metallb-system/speaker-lk5df" Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.038364 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bed4152-970e-428e-a8bd-21fe165bde92-metrics-certs\") pod \"controller-6968d8fdc4-t622w\" (UID: \"7bed4152-970e-428e-a8bd-21fe165bde92\") " pod="metallb-system/controller-6968d8fdc4-t622w" Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.038404 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6b567ea4-6df9-4c62-8caf-c8bb77aae0b7-memberlist\") pod \"speaker-lk5df\" (UID: \"6b567ea4-6df9-4c62-8caf-c8bb77aae0b7\") " pod="metallb-system/speaker-lk5df" Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.038441 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b567ea4-6df9-4c62-8caf-c8bb77aae0b7-metrics-certs\") pod 
\"speaker-lk5df\" (UID: \"6b567ea4-6df9-4c62-8caf-c8bb77aae0b7\") " pod="metallb-system/speaker-lk5df" Jan 22 15:38:43 crc kubenswrapper[4825]: E0122 15:38:43.038631 4825 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 22 15:38:43 crc kubenswrapper[4825]: E0122 15:38:43.038688 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b567ea4-6df9-4c62-8caf-c8bb77aae0b7-memberlist podName:6b567ea4-6df9-4c62-8caf-c8bb77aae0b7 nodeName:}" failed. No retries permitted until 2026-01-22 15:38:43.538669258 +0000 UTC m=+870.300196188 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6b567ea4-6df9-4c62-8caf-c8bb77aae0b7-memberlist") pod "speaker-lk5df" (UID: "6b567ea4-6df9-4c62-8caf-c8bb77aae0b7") : secret "metallb-memberlist" not found Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.038998 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6b567ea4-6df9-4c62-8caf-c8bb77aae0b7-metallb-excludel2\") pod \"speaker-lk5df\" (UID: \"6b567ea4-6df9-4c62-8caf-c8bb77aae0b7\") " pod="metallb-system/speaker-lk5df" Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.042576 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b567ea4-6df9-4c62-8caf-c8bb77aae0b7-metrics-certs\") pod \"speaker-lk5df\" (UID: \"6b567ea4-6df9-4c62-8caf-c8bb77aae0b7\") " pod="metallb-system/speaker-lk5df" Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.047311 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-kt4gs" Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.060724 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfjxt\" (UniqueName: \"kubernetes.io/projected/6b567ea4-6df9-4c62-8caf-c8bb77aae0b7-kube-api-access-sfjxt\") pod \"speaker-lk5df\" (UID: \"6b567ea4-6df9-4c62-8caf-c8bb77aae0b7\") " pod="metallb-system/speaker-lk5df" Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.139922 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7bed4152-970e-428e-a8bd-21fe165bde92-cert\") pod \"controller-6968d8fdc4-t622w\" (UID: \"7bed4152-970e-428e-a8bd-21fe165bde92\") " pod="metallb-system/controller-6968d8fdc4-t622w" Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.140016 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxv5c\" (UniqueName: \"kubernetes.io/projected/7bed4152-970e-428e-a8bd-21fe165bde92-kube-api-access-wxv5c\") pod \"controller-6968d8fdc4-t622w\" (UID: \"7bed4152-970e-428e-a8bd-21fe165bde92\") " pod="metallb-system/controller-6968d8fdc4-t622w" Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.140100 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bed4152-970e-428e-a8bd-21fe165bde92-metrics-certs\") pod \"controller-6968d8fdc4-t622w\" (UID: \"7bed4152-970e-428e-a8bd-21fe165bde92\") " pod="metallb-system/controller-6968d8fdc4-t622w" Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.142430 4825 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.143958 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/7bed4152-970e-428e-a8bd-21fe165bde92-metrics-certs\") pod \"controller-6968d8fdc4-t622w\" (UID: \"7bed4152-970e-428e-a8bd-21fe165bde92\") " pod="metallb-system/controller-6968d8fdc4-t622w" Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.154515 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7bed4152-970e-428e-a8bd-21fe165bde92-cert\") pod \"controller-6968d8fdc4-t622w\" (UID: \"7bed4152-970e-428e-a8bd-21fe165bde92\") " pod="metallb-system/controller-6968d8fdc4-t622w" Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.163312 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxv5c\" (UniqueName: \"kubernetes.io/projected/7bed4152-970e-428e-a8bd-21fe165bde92-kube-api-access-wxv5c\") pod \"controller-6968d8fdc4-t622w\" (UID: \"7bed4152-970e-428e-a8bd-21fe165bde92\") " pod="metallb-system/controller-6968d8fdc4-t622w" Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.211705 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-t622w"
Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.345323 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/148811c4-5c9f-4c58-86f6-df32772b3fb9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zk72c\" (UID: \"148811c4-5c9f-4c58-86f6-df32772b3fb9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zk72c"
Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.351455 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/148811c4-5c9f-4c58-86f6-df32772b3fb9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zk72c\" (UID: \"148811c4-5c9f-4c58-86f6-df32772b3fb9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zk72c"
Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.356827 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zk72c"
Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.676763 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6b567ea4-6df9-4c62-8caf-c8bb77aae0b7-memberlist\") pod \"speaker-lk5df\" (UID: \"6b567ea4-6df9-4c62-8caf-c8bb77aae0b7\") " pod="metallb-system/speaker-lk5df"
Jan 22 15:38:43 crc kubenswrapper[4825]: E0122 15:38:43.676954 4825 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 22 15:38:43 crc kubenswrapper[4825]: E0122 15:38:43.677026 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b567ea4-6df9-4c62-8caf-c8bb77aae0b7-memberlist podName:6b567ea4-6df9-4c62-8caf-c8bb77aae0b7 nodeName:}" failed. No retries permitted until 2026-01-22 15:38:44.677008203 +0000 UTC m=+871.438535113 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6b567ea4-6df9-4c62-8caf-c8bb77aae0b7-memberlist") pod "speaker-lk5df" (UID: "6b567ea4-6df9-4c62-8caf-c8bb77aae0b7") : secret "metallb-memberlist" not found
Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.722116 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-t622w"]
Jan 22 15:38:43 crc kubenswrapper[4825]: W0122 15:38:43.733662 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bed4152_970e_428e_a8bd_21fe165bde92.slice/crio-e80adb6961e128ec0ce7d11384d8b10ba75b58ec4ab62a8cfd9ec0f489b0bcdd WatchSource:0}: Error finding container e80adb6961e128ec0ce7d11384d8b10ba75b58ec4ab62a8cfd9ec0f489b0bcdd: Status 404 returned error can't find the container with id e80adb6961e128ec0ce7d11384d8b10ba75b58ec4ab62a8cfd9ec0f489b0bcdd
Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.833199 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-t622w" event={"ID":"7bed4152-970e-428e-a8bd-21fe165bde92","Type":"ContainerStarted","Data":"e80adb6961e128ec0ce7d11384d8b10ba75b58ec4ab62a8cfd9ec0f489b0bcdd"}
Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.837133 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kt4gs" event={"ID":"0773281d-f402-46e0-ae19-32a82824046b","Type":"ContainerStarted","Data":"eac857e41ad4cabcca191dd3b1935e5dba159a3c96babab0774ddd71c078f79c"}
Jan 22 15:38:43 crc kubenswrapper[4825]: I0122 15:38:43.991065 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-zk72c"]
Jan 22 15:38:44 crc kubenswrapper[4825]: I0122 15:38:44.696096 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6b567ea4-6df9-4c62-8caf-c8bb77aae0b7-memberlist\") pod \"speaker-lk5df\" (UID: \"6b567ea4-6df9-4c62-8caf-c8bb77aae0b7\") " pod="metallb-system/speaker-lk5df"
Jan 22 15:38:44 crc kubenswrapper[4825]: I0122 15:38:44.729367 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6b567ea4-6df9-4c62-8caf-c8bb77aae0b7-memberlist\") pod \"speaker-lk5df\" (UID: \"6b567ea4-6df9-4c62-8caf-c8bb77aae0b7\") " pod="metallb-system/speaker-lk5df"
Jan 22 15:38:44 crc kubenswrapper[4825]: I0122 15:38:44.855580 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-t622w" event={"ID":"7bed4152-970e-428e-a8bd-21fe165bde92","Type":"ContainerStarted","Data":"53ebb2862e719fbcde5451d163b6569896943f171a9733095eacee547fbf97f8"}
Jan 22 15:38:44 crc kubenswrapper[4825]: I0122 15:38:44.855623 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-t622w" event={"ID":"7bed4152-970e-428e-a8bd-21fe165bde92","Type":"ContainerStarted","Data":"8f37fb77baf378bb87344ba45e1c5e04f22617bd592f6a7d820bde391240c82e"}
Jan 22 15:38:44 crc kubenswrapper[4825]: I0122 15:38:44.856690 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-t622w"
Jan 22 15:38:44 crc kubenswrapper[4825]: I0122 15:38:44.858709 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zk72c" event={"ID":"148811c4-5c9f-4c58-86f6-df32772b3fb9","Type":"ContainerStarted","Data":"12f61fd5db4683e734ce79baa2dec802d3406468e1287dc3bc5c72fc8e505395"}
Jan 22 15:38:44 crc kubenswrapper[4825]: I0122 15:38:44.880695 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-t622w" podStartSLOduration=2.880674593 podStartE2EDuration="2.880674593s" podCreationTimestamp="2026-01-22 15:38:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:38:44.878303996 +0000 UTC m=+871.639830916" watchObservedRunningTime="2026-01-22 15:38:44.880674593 +0000 UTC m=+871.642201503"
Jan 22 15:38:44 crc kubenswrapper[4825]: I0122 15:38:44.954202 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-lk5df"
Jan 22 15:38:45 crc kubenswrapper[4825]: I0122 15:38:45.875435 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lk5df" event={"ID":"6b567ea4-6df9-4c62-8caf-c8bb77aae0b7","Type":"ContainerStarted","Data":"b903c948597501aa8c028f6461de0d727f850ce3a2a27ad4a6f1080cab60c32e"}
Jan 22 15:38:45 crc kubenswrapper[4825]: I0122 15:38:45.875756 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lk5df" event={"ID":"6b567ea4-6df9-4c62-8caf-c8bb77aae0b7","Type":"ContainerStarted","Data":"91072f440fd66205af2dd867e56c3974580ae6983b8b2a02cb360dfd69901861"}
Jan 22 15:38:46 crc kubenswrapper[4825]: I0122 15:38:46.887051 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lk5df" event={"ID":"6b567ea4-6df9-4c62-8caf-c8bb77aae0b7","Type":"ContainerStarted","Data":"93731393ed833273f4c270924bf1ac67a8a977c1e986a304726c0c7eb6c68adf"}
Jan 22 15:38:46 crc kubenswrapper[4825]: I0122 15:38:46.887096 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-lk5df"
Jan 22 15:38:46 crc kubenswrapper[4825]: I0122 15:38:46.904786 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-lk5df" podStartSLOduration=4.9047712 podStartE2EDuration="4.9047712s" podCreationTimestamp="2026-01-22 15:38:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:38:46.900003695 +0000 UTC m=+873.661530605"
watchObservedRunningTime="2026-01-22 15:38:46.9047712 +0000 UTC m=+873.666298110"
Jan 22 15:38:53 crc kubenswrapper[4825]: I0122 15:38:53.216119 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-t622w"
Jan 22 15:38:55 crc kubenswrapper[4825]: I0122 15:38:55.983337 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zk72c" event={"ID":"148811c4-5c9f-4c58-86f6-df32772b3fb9","Type":"ContainerStarted","Data":"14dff4e10b4a480f102dff006cf29f9ff5d8595a9aa4c7c59675b294e7c3ff0e"}
Jan 22 15:38:55 crc kubenswrapper[4825]: I0122 15:38:55.983970 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zk72c"
Jan 22 15:38:55 crc kubenswrapper[4825]: I0122 15:38:55.986335 4825 generic.go:334] "Generic (PLEG): container finished" podID="0773281d-f402-46e0-ae19-32a82824046b" containerID="f4f5c91467b5476ad1287ce32ff3adb0988d742f2f143b75d2f836372b3e7527" exitCode=0
Jan 22 15:38:55 crc kubenswrapper[4825]: I0122 15:38:55.986377 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kt4gs" event={"ID":"0773281d-f402-46e0-ae19-32a82824046b","Type":"ContainerDied","Data":"f4f5c91467b5476ad1287ce32ff3adb0988d742f2f143b75d2f836372b3e7527"}
Jan 22 15:38:56 crc kubenswrapper[4825]: I0122 15:38:56.030389 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zk72c" podStartSLOduration=2.790967922 podStartE2EDuration="14.030365474s" podCreationTimestamp="2026-01-22 15:38:42 +0000 UTC" firstStartedPulling="2026-01-22 15:38:44.002503825 +0000 UTC m=+870.764030735" lastFinishedPulling="2026-01-22 15:38:55.241901367 +0000 UTC m=+882.003428287" observedRunningTime="2026-01-22 15:38:56.003807578 +0000 UTC m=+882.765334518" watchObservedRunningTime="2026-01-22 15:38:56.030365474 +0000 UTC m=+882.791892404"
Jan 22 15:38:56 crc kubenswrapper[4825]: I0122 15:38:56.999101 4825 generic.go:334] "Generic (PLEG): container finished" podID="0773281d-f402-46e0-ae19-32a82824046b" containerID="d846e265056030c1dac51dde586ba0ed3eda0d0b6c2a6172fa32a6df6b373be4" exitCode=0
Jan 22 15:38:56 crc kubenswrapper[4825]: I0122 15:38:56.999243 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kt4gs" event={"ID":"0773281d-f402-46e0-ae19-32a82824046b","Type":"ContainerDied","Data":"d846e265056030c1dac51dde586ba0ed3eda0d0b6c2a6172fa32a6df6b373be4"}
Jan 22 15:38:58 crc kubenswrapper[4825]: I0122 15:38:58.007671 4825 generic.go:334] "Generic (PLEG): container finished" podID="0773281d-f402-46e0-ae19-32a82824046b" containerID="ad1c41fe91bff6f5634635a5c85efef70b52bca01e66b71453ccd189639be1a3" exitCode=0
Jan 22 15:38:58 crc kubenswrapper[4825]: I0122 15:38:58.007766 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kt4gs" event={"ID":"0773281d-f402-46e0-ae19-32a82824046b","Type":"ContainerDied","Data":"ad1c41fe91bff6f5634635a5c85efef70b52bca01e66b71453ccd189639be1a3"}
Jan 22 15:38:59 crc kubenswrapper[4825]: I0122 15:38:59.018071 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kt4gs" event={"ID":"0773281d-f402-46e0-ae19-32a82824046b","Type":"ContainerStarted","Data":"7e6fd1bea570f6443dabdfcde3457340f7ef60ca1b944cb91aadbcae0851b0c9"}
Jan 22 15:38:59 crc kubenswrapper[4825]: I0122 15:38:59.018451 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kt4gs" event={"ID":"0773281d-f402-46e0-ae19-32a82824046b","Type":"ContainerStarted","Data":"73793624d0219ed6cea7afb457e90264b093602f2e03c65f2df743a0134bf77f"}
Jan 22 15:38:59 crc kubenswrapper[4825]: I0122 15:38:59.018473 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kt4gs" event={"ID":"0773281d-f402-46e0-ae19-32a82824046b","Type":"ContainerStarted","Data":"8dc88b2de6816890e0e21ee4e4206066477048016c088d22bd1b7bb68d005bc4"}
Jan 22 15:38:59 crc kubenswrapper[4825]: I0122 15:38:59.018488 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kt4gs" event={"ID":"0773281d-f402-46e0-ae19-32a82824046b","Type":"ContainerStarted","Data":"41f99307a04d81058684c8473ffafc4e44a5d3cae09c163461df282356a763c9"}
Jan 22 15:39:00 crc kubenswrapper[4825]: I0122 15:39:00.027436 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kt4gs" event={"ID":"0773281d-f402-46e0-ae19-32a82824046b","Type":"ContainerStarted","Data":"5493fccf331940f20beaf15dfe1883afed035fa9ccc51999d64f1d635ba47abf"}
Jan 22 15:39:00 crc kubenswrapper[4825]: I0122 15:39:00.027676 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kt4gs" event={"ID":"0773281d-f402-46e0-ae19-32a82824046b","Type":"ContainerStarted","Data":"c3c6b3fd74f6cabe4af232962834dba429201aee734590d24be252987ff34c09"}
Jan 22 15:39:00 crc kubenswrapper[4825]: I0122 15:39:00.027814 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-kt4gs"
Jan 22 15:39:00 crc kubenswrapper[4825]: I0122 15:39:00.059380 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-kt4gs" podStartSLOduration=5.994555282 podStartE2EDuration="18.059355691s" podCreationTimestamp="2026-01-22 15:38:42 +0000 UTC" firstStartedPulling="2026-01-22 15:38:43.201669917 +0000 UTC m=+869.963196817" lastFinishedPulling="2026-01-22 15:38:55.266470306 +0000 UTC m=+882.027997226" observedRunningTime="2026-01-22 15:39:00.057260701 +0000 UTC m=+886.818787611" watchObservedRunningTime="2026-01-22 15:39:00.059355691 +0000 UTC m=+886.820882621"
Jan 22 15:39:02 crc kubenswrapper[4825]: I0122 15:39:02.586944 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rf6gj"]
Jan 22 15:39:02 crc kubenswrapper[4825]: I0122 15:39:02.590604 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rf6gj"
Jan 22 15:39:02 crc kubenswrapper[4825]: I0122 15:39:02.612506 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rf6gj"]
Jan 22 15:39:02 crc kubenswrapper[4825]: I0122 15:39:02.625786 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ee7521-c441-45e4-899b-08834e5cec02-catalog-content\") pod \"community-operators-rf6gj\" (UID: \"c9ee7521-c441-45e4-899b-08834e5cec02\") " pod="openshift-marketplace/community-operators-rf6gj"
Jan 22 15:39:02 crc kubenswrapper[4825]: I0122 15:39:02.625862 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp9zt\" (UniqueName: \"kubernetes.io/projected/c9ee7521-c441-45e4-899b-08834e5cec02-kube-api-access-vp9zt\") pod \"community-operators-rf6gj\" (UID: \"c9ee7521-c441-45e4-899b-08834e5cec02\") " pod="openshift-marketplace/community-operators-rf6gj"
Jan 22 15:39:02 crc kubenswrapper[4825]: I0122 15:39:02.625885 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ee7521-c441-45e4-899b-08834e5cec02-utilities\") pod \"community-operators-rf6gj\" (UID: \"c9ee7521-c441-45e4-899b-08834e5cec02\") " pod="openshift-marketplace/community-operators-rf6gj"
Jan 22 15:39:02 crc kubenswrapper[4825]: I0122 15:39:02.726859 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ee7521-c441-45e4-899b-08834e5cec02-catalog-content\") pod \"community-operators-rf6gj\" (UID: 
\"c9ee7521-c441-45e4-899b-08834e5cec02\") " pod="openshift-marketplace/community-operators-rf6gj"
Jan 22 15:39:02 crc kubenswrapper[4825]: I0122 15:39:02.726944 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp9zt\" (UniqueName: \"kubernetes.io/projected/c9ee7521-c441-45e4-899b-08834e5cec02-kube-api-access-vp9zt\") pod \"community-operators-rf6gj\" (UID: \"c9ee7521-c441-45e4-899b-08834e5cec02\") " pod="openshift-marketplace/community-operators-rf6gj"
Jan 22 15:39:02 crc kubenswrapper[4825]: I0122 15:39:02.726963 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ee7521-c441-45e4-899b-08834e5cec02-utilities\") pod \"community-operators-rf6gj\" (UID: \"c9ee7521-c441-45e4-899b-08834e5cec02\") " pod="openshift-marketplace/community-operators-rf6gj"
Jan 22 15:39:02 crc kubenswrapper[4825]: I0122 15:39:02.727680 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ee7521-c441-45e4-899b-08834e5cec02-catalog-content\") pod \"community-operators-rf6gj\" (UID: \"c9ee7521-c441-45e4-899b-08834e5cec02\") " pod="openshift-marketplace/community-operators-rf6gj"
Jan 22 15:39:02 crc kubenswrapper[4825]: I0122 15:39:02.729701 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ee7521-c441-45e4-899b-08834e5cec02-utilities\") pod \"community-operators-rf6gj\" (UID: \"c9ee7521-c441-45e4-899b-08834e5cec02\") " pod="openshift-marketplace/community-operators-rf6gj"
Jan 22 15:39:02 crc kubenswrapper[4825]: I0122 15:39:02.748494 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp9zt\" (UniqueName: \"kubernetes.io/projected/c9ee7521-c441-45e4-899b-08834e5cec02-kube-api-access-vp9zt\") pod \"community-operators-rf6gj\" (UID: \"c9ee7521-c441-45e4-899b-08834e5cec02\") " pod="openshift-marketplace/community-operators-rf6gj"
Jan 22 15:39:02 crc kubenswrapper[4825]: I0122 15:39:02.916641 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rf6gj"
Jan 22 15:39:03 crc kubenswrapper[4825]: I0122 15:39:03.156715 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-kt4gs"
Jan 22 15:39:03 crc kubenswrapper[4825]: I0122 15:39:03.229289 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-kt4gs"
Jan 22 15:39:04 crc kubenswrapper[4825]: I0122 15:39:04.139508 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rf6gj"]
Jan 22 15:39:04 crc kubenswrapper[4825]: W0122 15:39:04.141181 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9ee7521_c441_45e4_899b_08834e5cec02.slice/crio-1ba85d04b9bcc10da4627d8ca331c8f7cd41dd8584daee9b49d3d860a116c6b9 WatchSource:0}: Error finding container 1ba85d04b9bcc10da4627d8ca331c8f7cd41dd8584daee9b49d3d860a116c6b9: Status 404 returned error can't find the container with id 1ba85d04b9bcc10da4627d8ca331c8f7cd41dd8584daee9b49d3d860a116c6b9
Jan 22 15:39:04 crc kubenswrapper[4825]: I0122 15:39:04.172972 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rf6gj" event={"ID":"c9ee7521-c441-45e4-899b-08834e5cec02","Type":"ContainerStarted","Data":"1ba85d04b9bcc10da4627d8ca331c8f7cd41dd8584daee9b49d3d860a116c6b9"}
Jan 22 15:39:04 crc kubenswrapper[4825]: I0122 15:39:04.959816 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-lk5df"
Jan 22 15:39:05 crc kubenswrapper[4825]: I0122 15:39:05.182050 4825 generic.go:334] "Generic (PLEG): container finished" podID="c9ee7521-c441-45e4-899b-08834e5cec02" containerID="f51eb36b24dee266f97227e55fcf6c349ae636f20de460ca99375c754c03a6af" exitCode=0
Jan 22 15:39:05 crc kubenswrapper[4825]: I0122 15:39:05.182128 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rf6gj" event={"ID":"c9ee7521-c441-45e4-899b-08834e5cec02","Type":"ContainerDied","Data":"f51eb36b24dee266f97227e55fcf6c349ae636f20de460ca99375c754c03a6af"}
Jan 22 15:39:05 crc kubenswrapper[4825]: I0122 15:39:05.184217 4825 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 22 15:39:06 crc kubenswrapper[4825]: I0122 15:39:06.300109 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rf6gj" event={"ID":"c9ee7521-c441-45e4-899b-08834e5cec02","Type":"ContainerStarted","Data":"6cf20c01250de962b19ac812d3749dbeb7f2813f3fc03e4074e1348abc7894f6"}
Jan 22 15:39:07 crc kubenswrapper[4825]: I0122 15:39:07.308538 4825 generic.go:334] "Generic (PLEG): container finished" podID="c9ee7521-c441-45e4-899b-08834e5cec02" containerID="6cf20c01250de962b19ac812d3749dbeb7f2813f3fc03e4074e1348abc7894f6" exitCode=0
Jan 22 15:39:07 crc kubenswrapper[4825]: I0122 15:39:07.308615 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rf6gj" event={"ID":"c9ee7521-c441-45e4-899b-08834e5cec02","Type":"ContainerDied","Data":"6cf20c01250de962b19ac812d3749dbeb7f2813f3fc03e4074e1348abc7894f6"}
Jan 22 15:39:08 crc kubenswrapper[4825]: I0122 15:39:08.321593 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rf6gj" event={"ID":"c9ee7521-c441-45e4-899b-08834e5cec02","Type":"ContainerStarted","Data":"e889fa752df1c67f190ad4fe7f9c87293a7e69ece374d6dd7794cf2588f772d2"}
Jan 22 15:39:08 crc kubenswrapper[4825]: I0122 15:39:08.346003 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rf6gj" podStartSLOduration=3.792026397 podStartE2EDuration="6.345963339s" podCreationTimestamp="2026-01-22 15:39:02 +0000 UTC" firstStartedPulling="2026-01-22 15:39:05.183877452 +0000 UTC m=+891.945404372" lastFinishedPulling="2026-01-22 15:39:07.737814384 +0000 UTC m=+894.499341314" observedRunningTime="2026-01-22 15:39:08.339339701 +0000 UTC m=+895.100866621" watchObservedRunningTime="2026-01-22 15:39:08.345963339 +0000 UTC m=+895.107490249"
Jan 22 15:39:11 crc kubenswrapper[4825]: I0122 15:39:11.374144 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xcglg"]
Jan 22 15:39:11 crc kubenswrapper[4825]: I0122 15:39:11.375193 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xcglg"
Jan 22 15:39:11 crc kubenswrapper[4825]: I0122 15:39:11.378339 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 22 15:39:11 crc kubenswrapper[4825]: I0122 15:39:11.378424 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 22 15:39:11 crc kubenswrapper[4825]: I0122 15:39:11.378844 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-ff864"
Jan 22 15:39:11 crc kubenswrapper[4825]: I0122 15:39:11.399087 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xcglg"]
Jan 22 15:39:11 crc kubenswrapper[4825]: I0122 15:39:11.571791 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwqx5\" (UniqueName: \"kubernetes.io/projected/28529c7d-badd-40d9-a46d-2b2632765ce6-kube-api-access-dwqx5\") pod \"openstack-operator-index-xcglg\" (UID: \"28529c7d-badd-40d9-a46d-2b2632765ce6\") " 
pod="openstack-operators/openstack-operator-index-xcglg"
Jan 22 15:39:11 crc kubenswrapper[4825]: I0122 15:39:11.673260 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwqx5\" (UniqueName: \"kubernetes.io/projected/28529c7d-badd-40d9-a46d-2b2632765ce6-kube-api-access-dwqx5\") pod \"openstack-operator-index-xcglg\" (UID: \"28529c7d-badd-40d9-a46d-2b2632765ce6\") " pod="openstack-operators/openstack-operator-index-xcglg"
Jan 22 15:39:11 crc kubenswrapper[4825]: I0122 15:39:11.691650 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwqx5\" (UniqueName: \"kubernetes.io/projected/28529c7d-badd-40d9-a46d-2b2632765ce6-kube-api-access-dwqx5\") pod \"openstack-operator-index-xcglg\" (UID: \"28529c7d-badd-40d9-a46d-2b2632765ce6\") " pod="openstack-operators/openstack-operator-index-xcglg"
Jan 22 15:39:11 crc kubenswrapper[4825]: I0122 15:39:11.701674 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xcglg"
Jan 22 15:39:12 crc kubenswrapper[4825]: I0122 15:39:12.100780 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xcglg"]
Jan 22 15:39:12 crc kubenswrapper[4825]: W0122 15:39:12.110163 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28529c7d_badd_40d9_a46d_2b2632765ce6.slice/crio-e1faceebfdf5d8d61c66a431e4b4e5d609f08f100faacc09c742a73971cffa68 WatchSource:0}: Error finding container e1faceebfdf5d8d61c66a431e4b4e5d609f08f100faacc09c742a73971cffa68: Status 404 returned error can't find the container with id e1faceebfdf5d8d61c66a431e4b4e5d609f08f100faacc09c742a73971cffa68
Jan 22 15:39:12 crc kubenswrapper[4825]: I0122 15:39:12.352484 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xcglg" event={"ID":"28529c7d-badd-40d9-a46d-2b2632765ce6","Type":"ContainerStarted","Data":"e1faceebfdf5d8d61c66a431e4b4e5d609f08f100faacc09c742a73971cffa68"}
Jan 22 15:39:12 crc kubenswrapper[4825]: I0122 15:39:12.917535 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rf6gj"
Jan 22 15:39:12 crc kubenswrapper[4825]: I0122 15:39:12.917615 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rf6gj"
Jan 22 15:39:13 crc kubenswrapper[4825]: I0122 15:39:13.165470 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-kt4gs"
Jan 22 15:39:13 crc kubenswrapper[4825]: I0122 15:39:13.182789 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rf6gj"
Jan 22 15:39:13 crc kubenswrapper[4825]: I0122 15:39:13.363013 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zk72c"
Jan 22 15:39:13 crc kubenswrapper[4825]: I0122 15:39:13.406311 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rf6gj"
Jan 22 15:39:16 crc kubenswrapper[4825]: I0122 15:39:16.966865 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l7259"]
Jan 22 15:39:16 crc kubenswrapper[4825]: I0122 15:39:16.969023 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7259"
Jan 22 15:39:17 crc kubenswrapper[4825]: I0122 15:39:17.000434 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7259"]
Jan 22 15:39:17 crc kubenswrapper[4825]: I0122 15:39:17.169114 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3d2a06f-d565-462c-82d2-c912eb3f9115-utilities\") pod \"redhat-marketplace-l7259\" (UID: \"f3d2a06f-d565-462c-82d2-c912eb3f9115\") " pod="openshift-marketplace/redhat-marketplace-l7259"
Jan 22 15:39:17 crc kubenswrapper[4825]: I0122 15:39:17.169224 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3d2a06f-d565-462c-82d2-c912eb3f9115-catalog-content\") pod \"redhat-marketplace-l7259\" (UID: \"f3d2a06f-d565-462c-82d2-c912eb3f9115\") " pod="openshift-marketplace/redhat-marketplace-l7259"
Jan 22 15:39:17 crc kubenswrapper[4825]: I0122 15:39:17.169288 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrnf6\" (UniqueName: \"kubernetes.io/projected/f3d2a06f-d565-462c-82d2-c912eb3f9115-kube-api-access-rrnf6\") pod \"redhat-marketplace-l7259\" (UID: \"f3d2a06f-d565-462c-82d2-c912eb3f9115\") " pod="openshift-marketplace/redhat-marketplace-l7259"
Jan 22 15:39:17 crc kubenswrapper[4825]: I0122 15:39:17.270154 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3d2a06f-d565-462c-82d2-c912eb3f9115-utilities\") pod \"redhat-marketplace-l7259\" (UID: \"f3d2a06f-d565-462c-82d2-c912eb3f9115\") " pod="openshift-marketplace/redhat-marketplace-l7259"
Jan 22 15:39:17 crc kubenswrapper[4825]: I0122 15:39:17.270515 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3d2a06f-d565-462c-82d2-c912eb3f9115-catalog-content\") pod \"redhat-marketplace-l7259\" (UID: \"f3d2a06f-d565-462c-82d2-c912eb3f9115\") " pod="openshift-marketplace/redhat-marketplace-l7259"
Jan 22 15:39:17 crc kubenswrapper[4825]: I0122 15:39:17.270626 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrnf6\" (UniqueName: \"kubernetes.io/projected/f3d2a06f-d565-462c-82d2-c912eb3f9115-kube-api-access-rrnf6\") pod \"redhat-marketplace-l7259\" (UID: \"f3d2a06f-d565-462c-82d2-c912eb3f9115\") " pod="openshift-marketplace/redhat-marketplace-l7259"
Jan 22 15:39:17 crc kubenswrapper[4825]: I0122 15:39:17.270771 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3d2a06f-d565-462c-82d2-c912eb3f9115-utilities\") pod \"redhat-marketplace-l7259\" (UID: \"f3d2a06f-d565-462c-82d2-c912eb3f9115\") " pod="openshift-marketplace/redhat-marketplace-l7259"
Jan 22 15:39:17 crc kubenswrapper[4825]: I0122 15:39:17.270888 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3d2a06f-d565-462c-82d2-c912eb3f9115-catalog-content\") pod \"redhat-marketplace-l7259\" (UID: \"f3d2a06f-d565-462c-82d2-c912eb3f9115\") " pod="openshift-marketplace/redhat-marketplace-l7259"
Jan 22 15:39:17 crc kubenswrapper[4825]: I0122 15:39:17.292139 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrnf6\" (UniqueName: \"kubernetes.io/projected/f3d2a06f-d565-462c-82d2-c912eb3f9115-kube-api-access-rrnf6\") pod \"redhat-marketplace-l7259\" (UID: \"f3d2a06f-d565-462c-82d2-c912eb3f9115\") " pod="openshift-marketplace/redhat-marketplace-l7259"
Jan 22 15:39:17 crc kubenswrapper[4825]: I0122 15:39:17.296724 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7259"
Jan 22 15:39:18 crc kubenswrapper[4825]: I0122 15:39:18.031505 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7259"]
Jan 22 15:39:18 crc kubenswrapper[4825]: I0122 15:39:18.481085 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xcglg" event={"ID":"28529c7d-badd-40d9-a46d-2b2632765ce6","Type":"ContainerStarted","Data":"d491974970f7993f44daacf7b0b9f57698c4438d3b2c38c5137bc3fcbe3444f6"}
Jan 22 15:39:18 crc kubenswrapper[4825]: I0122 15:39:18.482541 4825 generic.go:334] "Generic (PLEG): container finished" podID="f3d2a06f-d565-462c-82d2-c912eb3f9115" containerID="7b4a073a59b9bb8a633bb59305dfc3f69ee220cba753e0b85668d22322153ca8" exitCode=0
Jan 22 15:39:18 crc kubenswrapper[4825]: I0122 15:39:18.482601 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7259" event={"ID":"f3d2a06f-d565-462c-82d2-c912eb3f9115","Type":"ContainerDied","Data":"7b4a073a59b9bb8a633bb59305dfc3f69ee220cba753e0b85668d22322153ca8"}
Jan 22 15:39:18 crc kubenswrapper[4825]: I0122 15:39:18.482635 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7259" event={"ID":"f3d2a06f-d565-462c-82d2-c912eb3f9115","Type":"ContainerStarted","Data":"7fc7d89444269f14bf234e364fc0a598257eaacb386c23faafc90cc4e4bd42b0"}
Jan 22 15:39:18 crc kubenswrapper[4825]: I0122 15:39:18.524050 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xcglg" podStartSLOduration=2.081399259 podStartE2EDuration="7.52403306s" podCreationTimestamp="2026-01-22 15:39:11 +0000 UTC" firstStartedPulling="2026-01-22 15:39:12.111894061 +0000 UTC m=+898.873420971" lastFinishedPulling="2026-01-22 15:39:17.554527862 +0000 UTC m=+904.316054772" observedRunningTime="2026-01-22 15:39:18.501326224 +0000 UTC m=+905.262853154" watchObservedRunningTime="2026-01-22 15:39:18.52403306 +0000 UTC m=+905.285559970"
Jan 22 15:39:18 crc kubenswrapper[4825]: I0122 15:39:18.755251 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rf6gj"]
Jan 22 15:39:18 crc kubenswrapper[4825]: I0122 15:39:18.755805 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rf6gj" podUID="c9ee7521-c441-45e4-899b-08834e5cec02" containerName="registry-server" containerID="cri-o://e889fa752df1c67f190ad4fe7f9c87293a7e69ece374d6dd7794cf2588f772d2" gracePeriod=2
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.211178 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rf6gj"
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.341063 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ee7521-c441-45e4-899b-08834e5cec02-utilities\") pod \"c9ee7521-c441-45e4-899b-08834e5cec02\" (UID: \"c9ee7521-c441-45e4-899b-08834e5cec02\") "
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.341144 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ee7521-c441-45e4-899b-08834e5cec02-catalog-content\") pod \"c9ee7521-c441-45e4-899b-08834e5cec02\" (UID: \"c9ee7521-c441-45e4-899b-08834e5cec02\") "
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.341242 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp9zt\" (UniqueName: \"kubernetes.io/projected/c9ee7521-c441-45e4-899b-08834e5cec02-kube-api-access-vp9zt\") pod \"c9ee7521-c441-45e4-899b-08834e5cec02\" (UID: \"c9ee7521-c441-45e4-899b-08834e5cec02\") "
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.341896 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9ee7521-c441-45e4-899b-08834e5cec02-utilities" (OuterVolumeSpecName: "utilities") pod "c9ee7521-c441-45e4-899b-08834e5cec02" (UID: "c9ee7521-c441-45e4-899b-08834e5cec02"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.347200 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9ee7521-c441-45e4-899b-08834e5cec02-kube-api-access-vp9zt" (OuterVolumeSpecName: "kube-api-access-vp9zt") pod "c9ee7521-c441-45e4-899b-08834e5cec02" (UID: "c9ee7521-c441-45e4-899b-08834e5cec02"). InnerVolumeSpecName "kube-api-access-vp9zt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.405848 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9ee7521-c441-45e4-899b-08834e5cec02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9ee7521-c441-45e4-899b-08834e5cec02" (UID: "c9ee7521-c441-45e4-899b-08834e5cec02"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.443224 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp9zt\" (UniqueName: \"kubernetes.io/projected/c9ee7521-c441-45e4-899b-08834e5cec02-kube-api-access-vp9zt\") on node \"crc\" DevicePath \"\""
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.443262 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ee7521-c441-45e4-899b-08834e5cec02-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.443280 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ee7521-c441-45e4-899b-08834e5cec02-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.491752 4825 generic.go:334] "Generic (PLEG): container finished" podID="c9ee7521-c441-45e4-899b-08834e5cec02" containerID="e889fa752df1c67f190ad4fe7f9c87293a7e69ece374d6dd7794cf2588f772d2" exitCode=0
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.491802 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rf6gj"
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.491818 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rf6gj" event={"ID":"c9ee7521-c441-45e4-899b-08834e5cec02","Type":"ContainerDied","Data":"e889fa752df1c67f190ad4fe7f9c87293a7e69ece374d6dd7794cf2588f772d2"}
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.491893 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rf6gj" event={"ID":"c9ee7521-c441-45e4-899b-08834e5cec02","Type":"ContainerDied","Data":"1ba85d04b9bcc10da4627d8ca331c8f7cd41dd8584daee9b49d3d860a116c6b9"}
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.491916 4825 scope.go:117] "RemoveContainer" containerID="e889fa752df1c67f190ad4fe7f9c87293a7e69ece374d6dd7794cf2588f772d2"
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.511471 4825 scope.go:117] "RemoveContainer" containerID="6cf20c01250de962b19ac812d3749dbeb7f2813f3fc03e4074e1348abc7894f6"
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.535263 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rf6gj"]
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.535303 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rf6gj"]
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.544543 4825 scope.go:117] "RemoveContainer" containerID="f51eb36b24dee266f97227e55fcf6c349ae636f20de460ca99375c754c03a6af"
Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.569530 4825 scope.go:117] "RemoveContainer" containerID="e889fa752df1c67f190ad4fe7f9c87293a7e69ece374d6dd7794cf2588f772d2"
Jan 22 15:39:19 crc kubenswrapper[4825]: E0122 15:39:19.570021 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e889fa752df1c67f190ad4fe7f9c87293a7e69ece374d6dd7794cf2588f772d2\": container with ID starting with e889fa752df1c67f190ad4fe7f9c87293a7e69ece374d6dd7794cf2588f772d2 not found: ID does not exist" containerID="e889fa752df1c67f190ad4fe7f9c87293a7e69ece374d6dd7794cf2588f772d2" Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.570075 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e889fa752df1c67f190ad4fe7f9c87293a7e69ece374d6dd7794cf2588f772d2"} err="failed to get container status \"e889fa752df1c67f190ad4fe7f9c87293a7e69ece374d6dd7794cf2588f772d2\": rpc error: code = NotFound desc = could not find container \"e889fa752df1c67f190ad4fe7f9c87293a7e69ece374d6dd7794cf2588f772d2\": container with ID starting with e889fa752df1c67f190ad4fe7f9c87293a7e69ece374d6dd7794cf2588f772d2 not found: ID does not exist" Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.570099 4825 scope.go:117] "RemoveContainer" containerID="6cf20c01250de962b19ac812d3749dbeb7f2813f3fc03e4074e1348abc7894f6" Jan 22 15:39:19 crc kubenswrapper[4825]: E0122 15:39:19.570400 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cf20c01250de962b19ac812d3749dbeb7f2813f3fc03e4074e1348abc7894f6\": container with ID starting with 6cf20c01250de962b19ac812d3749dbeb7f2813f3fc03e4074e1348abc7894f6 not found: ID does not exist" containerID="6cf20c01250de962b19ac812d3749dbeb7f2813f3fc03e4074e1348abc7894f6" Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.570447 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cf20c01250de962b19ac812d3749dbeb7f2813f3fc03e4074e1348abc7894f6"} err="failed to get container status \"6cf20c01250de962b19ac812d3749dbeb7f2813f3fc03e4074e1348abc7894f6\": rpc error: code = NotFound desc = could not find container \"6cf20c01250de962b19ac812d3749dbeb7f2813f3fc03e4074e1348abc7894f6\": container with ID 
starting with 6cf20c01250de962b19ac812d3749dbeb7f2813f3fc03e4074e1348abc7894f6 not found: ID does not exist" Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.570483 4825 scope.go:117] "RemoveContainer" containerID="f51eb36b24dee266f97227e55fcf6c349ae636f20de460ca99375c754c03a6af" Jan 22 15:39:19 crc kubenswrapper[4825]: E0122 15:39:19.570676 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f51eb36b24dee266f97227e55fcf6c349ae636f20de460ca99375c754c03a6af\": container with ID starting with f51eb36b24dee266f97227e55fcf6c349ae636f20de460ca99375c754c03a6af not found: ID does not exist" containerID="f51eb36b24dee266f97227e55fcf6c349ae636f20de460ca99375c754c03a6af" Jan 22 15:39:19 crc kubenswrapper[4825]: I0122 15:39:19.570698 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f51eb36b24dee266f97227e55fcf6c349ae636f20de460ca99375c754c03a6af"} err="failed to get container status \"f51eb36b24dee266f97227e55fcf6c349ae636f20de460ca99375c754c03a6af\": rpc error: code = NotFound desc = could not find container \"f51eb36b24dee266f97227e55fcf6c349ae636f20de460ca99375c754c03a6af\": container with ID starting with f51eb36b24dee266f97227e55fcf6c349ae636f20de460ca99375c754c03a6af not found: ID does not exist" Jan 22 15:39:20 crc kubenswrapper[4825]: I0122 15:39:20.504767 4825 generic.go:334] "Generic (PLEG): container finished" podID="f3d2a06f-d565-462c-82d2-c912eb3f9115" containerID="410b8286cefb53faf667bd3d7158d82d5bcb2675f525e4832ef4784399c55838" exitCode=0 Jan 22 15:39:20 crc kubenswrapper[4825]: I0122 15:39:20.504948 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7259" event={"ID":"f3d2a06f-d565-462c-82d2-c912eb3f9115","Type":"ContainerDied","Data":"410b8286cefb53faf667bd3d7158d82d5bcb2675f525e4832ef4784399c55838"} Jan 22 15:39:21 crc kubenswrapper[4825]: I0122 15:39:21.531873 4825 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ee7521-c441-45e4-899b-08834e5cec02" path="/var/lib/kubelet/pods/c9ee7521-c441-45e4-899b-08834e5cec02/volumes" Jan 22 15:39:21 crc kubenswrapper[4825]: I0122 15:39:21.702681 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-xcglg" Jan 22 15:39:21 crc kubenswrapper[4825]: I0122 15:39:21.702718 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-xcglg" Jan 22 15:39:21 crc kubenswrapper[4825]: I0122 15:39:21.732645 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-xcglg" Jan 22 15:39:22 crc kubenswrapper[4825]: I0122 15:39:22.547888 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-xcglg" Jan 22 15:39:23 crc kubenswrapper[4825]: I0122 15:39:23.534466 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7259" event={"ID":"f3d2a06f-d565-462c-82d2-c912eb3f9115","Type":"ContainerStarted","Data":"997b39dc6b5065bb0b45dc8e4b312bf99508e0cdd5eb5b91e36625375a5ebafe"} Jan 22 15:39:23 crc kubenswrapper[4825]: I0122 15:39:23.558647 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l7259" podStartSLOduration=2.874885214 podStartE2EDuration="7.558613803s" podCreationTimestamp="2026-01-22 15:39:16 +0000 UTC" firstStartedPulling="2026-01-22 15:39:18.483757064 +0000 UTC m=+905.245283974" lastFinishedPulling="2026-01-22 15:39:23.167485613 +0000 UTC m=+909.929012563" observedRunningTime="2026-01-22 15:39:23.55219865 +0000 UTC m=+910.313725590" watchObservedRunningTime="2026-01-22 15:39:23.558613803 +0000 UTC m=+910.320140763" Jan 22 15:39:24 crc kubenswrapper[4825]: I0122 15:39:24.807732 4825 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46"] Jan 22 15:39:24 crc kubenswrapper[4825]: E0122 15:39:24.808694 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ee7521-c441-45e4-899b-08834e5cec02" containerName="registry-server" Jan 22 15:39:24 crc kubenswrapper[4825]: I0122 15:39:24.808819 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ee7521-c441-45e4-899b-08834e5cec02" containerName="registry-server" Jan 22 15:39:24 crc kubenswrapper[4825]: E0122 15:39:24.808915 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ee7521-c441-45e4-899b-08834e5cec02" containerName="extract-content" Jan 22 15:39:24 crc kubenswrapper[4825]: I0122 15:39:24.808992 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ee7521-c441-45e4-899b-08834e5cec02" containerName="extract-content" Jan 22 15:39:24 crc kubenswrapper[4825]: E0122 15:39:24.809082 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ee7521-c441-45e4-899b-08834e5cec02" containerName="extract-utilities" Jan 22 15:39:24 crc kubenswrapper[4825]: I0122 15:39:24.809158 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ee7521-c441-45e4-899b-08834e5cec02" containerName="extract-utilities" Jan 22 15:39:24 crc kubenswrapper[4825]: I0122 15:39:24.809363 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ee7521-c441-45e4-899b-08834e5cec02" containerName="registry-server" Jan 22 15:39:24 crc kubenswrapper[4825]: I0122 15:39:24.810658 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46" Jan 22 15:39:24 crc kubenswrapper[4825]: I0122 15:39:24.813033 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-twhr7" Jan 22 15:39:24 crc kubenswrapper[4825]: I0122 15:39:24.817162 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46"] Jan 22 15:39:24 crc kubenswrapper[4825]: I0122 15:39:24.852456 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e45ace87-f1a4-47c4-9582-c56942dee924-bundle\") pod \"9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46\" (UID: \"e45ace87-f1a4-47c4-9582-c56942dee924\") " pod="openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46" Jan 22 15:39:24 crc kubenswrapper[4825]: I0122 15:39:24.852535 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78ggt\" (UniqueName: \"kubernetes.io/projected/e45ace87-f1a4-47c4-9582-c56942dee924-kube-api-access-78ggt\") pod \"9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46\" (UID: \"e45ace87-f1a4-47c4-9582-c56942dee924\") " pod="openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46" Jan 22 15:39:24 crc kubenswrapper[4825]: I0122 15:39:24.852625 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e45ace87-f1a4-47c4-9582-c56942dee924-util\") pod \"9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46\" (UID: \"e45ace87-f1a4-47c4-9582-c56942dee924\") " pod="openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46" Jan 22 15:39:24 crc kubenswrapper[4825]: I0122 
15:39:24.953820 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e45ace87-f1a4-47c4-9582-c56942dee924-util\") pod \"9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46\" (UID: \"e45ace87-f1a4-47c4-9582-c56942dee924\") " pod="openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46" Jan 22 15:39:24 crc kubenswrapper[4825]: I0122 15:39:24.953905 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e45ace87-f1a4-47c4-9582-c56942dee924-bundle\") pod \"9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46\" (UID: \"e45ace87-f1a4-47c4-9582-c56942dee924\") " pod="openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46" Jan 22 15:39:24 crc kubenswrapper[4825]: I0122 15:39:24.953960 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78ggt\" (UniqueName: \"kubernetes.io/projected/e45ace87-f1a4-47c4-9582-c56942dee924-kube-api-access-78ggt\") pod \"9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46\" (UID: \"e45ace87-f1a4-47c4-9582-c56942dee924\") " pod="openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46" Jan 22 15:39:24 crc kubenswrapper[4825]: I0122 15:39:24.954992 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e45ace87-f1a4-47c4-9582-c56942dee924-bundle\") pod \"9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46\" (UID: \"e45ace87-f1a4-47c4-9582-c56942dee924\") " pod="openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46" Jan 22 15:39:24 crc kubenswrapper[4825]: I0122 15:39:24.955005 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e45ace87-f1a4-47c4-9582-c56942dee924-util\") pod \"9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46\" (UID: \"e45ace87-f1a4-47c4-9582-c56942dee924\") " pod="openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46" Jan 22 15:39:24 crc kubenswrapper[4825]: I0122 15:39:24.977146 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78ggt\" (UniqueName: \"kubernetes.io/projected/e45ace87-f1a4-47c4-9582-c56942dee924-kube-api-access-78ggt\") pod \"9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46\" (UID: \"e45ace87-f1a4-47c4-9582-c56942dee924\") " pod="openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46" Jan 22 15:39:25 crc kubenswrapper[4825]: I0122 15:39:25.126661 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46" Jan 22 15:39:25 crc kubenswrapper[4825]: I0122 15:39:25.587749 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46"] Jan 22 15:39:26 crc kubenswrapper[4825]: I0122 15:39:26.565206 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46" event={"ID":"e45ace87-f1a4-47c4-9582-c56942dee924","Type":"ContainerStarted","Data":"f589b03de6140e37aecf73aa45f53f140580e16e360e7354a73a5372b5f54756"} Jan 22 15:39:27 crc kubenswrapper[4825]: I0122 15:39:27.297367 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l7259" Jan 22 15:39:27 crc kubenswrapper[4825]: I0122 15:39:27.297740 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l7259" Jan 22 15:39:27 crc kubenswrapper[4825]: I0122 15:39:27.367894 
4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l7259" Jan 22 15:39:27 crc kubenswrapper[4825]: I0122 15:39:27.572695 4825 generic.go:334] "Generic (PLEG): container finished" podID="e45ace87-f1a4-47c4-9582-c56942dee924" containerID="2b3598df0b7e777797e6d90cc23e509e3eaf55b24562b5619494043376fc2d71" exitCode=0 Jan 22 15:39:27 crc kubenswrapper[4825]: I0122 15:39:27.572746 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46" event={"ID":"e45ace87-f1a4-47c4-9582-c56942dee924","Type":"ContainerDied","Data":"2b3598df0b7e777797e6d90cc23e509e3eaf55b24562b5619494043376fc2d71"} Jan 22 15:39:28 crc kubenswrapper[4825]: I0122 15:39:28.583618 4825 generic.go:334] "Generic (PLEG): container finished" podID="e45ace87-f1a4-47c4-9582-c56942dee924" containerID="ac2ba680d61eb0d028b4774d2050eba3ebb248bf7db14d0fd4b80f16b2640d75" exitCode=0 Jan 22 15:39:28 crc kubenswrapper[4825]: I0122 15:39:28.583724 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46" event={"ID":"e45ace87-f1a4-47c4-9582-c56942dee924","Type":"ContainerDied","Data":"ac2ba680d61eb0d028b4774d2050eba3ebb248bf7db14d0fd4b80f16b2640d75"} Jan 22 15:39:29 crc kubenswrapper[4825]: I0122 15:39:29.601592 4825 generic.go:334] "Generic (PLEG): container finished" podID="e45ace87-f1a4-47c4-9582-c56942dee924" containerID="7b1143c543880b70ee997c995150c60506ad751f61377e41c6433f9a59aa8cf0" exitCode=0 Jan 22 15:39:29 crc kubenswrapper[4825]: I0122 15:39:29.602078 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46" event={"ID":"e45ace87-f1a4-47c4-9582-c56942dee924","Type":"ContainerDied","Data":"7b1143c543880b70ee997c995150c60506ad751f61377e41c6433f9a59aa8cf0"} Jan 22 15:39:30 crc 
kubenswrapper[4825]: I0122 15:39:30.882765 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46" Jan 22 15:39:30 crc kubenswrapper[4825]: I0122 15:39:30.978360 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78ggt\" (UniqueName: \"kubernetes.io/projected/e45ace87-f1a4-47c4-9582-c56942dee924-kube-api-access-78ggt\") pod \"e45ace87-f1a4-47c4-9582-c56942dee924\" (UID: \"e45ace87-f1a4-47c4-9582-c56942dee924\") " Jan 22 15:39:30 crc kubenswrapper[4825]: I0122 15:39:30.978537 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e45ace87-f1a4-47c4-9582-c56942dee924-bundle\") pod \"e45ace87-f1a4-47c4-9582-c56942dee924\" (UID: \"e45ace87-f1a4-47c4-9582-c56942dee924\") " Jan 22 15:39:30 crc kubenswrapper[4825]: I0122 15:39:30.978608 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e45ace87-f1a4-47c4-9582-c56942dee924-util\") pod \"e45ace87-f1a4-47c4-9582-c56942dee924\" (UID: \"e45ace87-f1a4-47c4-9582-c56942dee924\") " Jan 22 15:39:30 crc kubenswrapper[4825]: I0122 15:39:30.980697 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e45ace87-f1a4-47c4-9582-c56942dee924-bundle" (OuterVolumeSpecName: "bundle") pod "e45ace87-f1a4-47c4-9582-c56942dee924" (UID: "e45ace87-f1a4-47c4-9582-c56942dee924"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:39:30 crc kubenswrapper[4825]: I0122 15:39:30.985417 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e45ace87-f1a4-47c4-9582-c56942dee924-kube-api-access-78ggt" (OuterVolumeSpecName: "kube-api-access-78ggt") pod "e45ace87-f1a4-47c4-9582-c56942dee924" (UID: "e45ace87-f1a4-47c4-9582-c56942dee924"). InnerVolumeSpecName "kube-api-access-78ggt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:39:30 crc kubenswrapper[4825]: I0122 15:39:30.995382 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e45ace87-f1a4-47c4-9582-c56942dee924-util" (OuterVolumeSpecName: "util") pod "e45ace87-f1a4-47c4-9582-c56942dee924" (UID: "e45ace87-f1a4-47c4-9582-c56942dee924"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:39:31 crc kubenswrapper[4825]: I0122 15:39:31.080258 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78ggt\" (UniqueName: \"kubernetes.io/projected/e45ace87-f1a4-47c4-9582-c56942dee924-kube-api-access-78ggt\") on node \"crc\" DevicePath \"\"" Jan 22 15:39:31 crc kubenswrapper[4825]: I0122 15:39:31.080307 4825 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e45ace87-f1a4-47c4-9582-c56942dee924-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:39:31 crc kubenswrapper[4825]: I0122 15:39:31.080326 4825 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e45ace87-f1a4-47c4-9582-c56942dee924-util\") on node \"crc\" DevicePath \"\"" Jan 22 15:39:31 crc kubenswrapper[4825]: I0122 15:39:31.620869 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46" 
event={"ID":"e45ace87-f1a4-47c4-9582-c56942dee924","Type":"ContainerDied","Data":"f589b03de6140e37aecf73aa45f53f140580e16e360e7354a73a5372b5f54756"} Jan 22 15:39:31 crc kubenswrapper[4825]: I0122 15:39:31.620910 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46" Jan 22 15:39:31 crc kubenswrapper[4825]: I0122 15:39:31.620921 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f589b03de6140e37aecf73aa45f53f140580e16e360e7354a73a5372b5f54756" Jan 22 15:39:34 crc kubenswrapper[4825]: I0122 15:39:34.138243 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7bf86bd88b-knxbj"] Jan 22 15:39:34 crc kubenswrapper[4825]: E0122 15:39:34.138958 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45ace87-f1a4-47c4-9582-c56942dee924" containerName="pull" Jan 22 15:39:34 crc kubenswrapper[4825]: I0122 15:39:34.138996 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45ace87-f1a4-47c4-9582-c56942dee924" containerName="pull" Jan 22 15:39:34 crc kubenswrapper[4825]: E0122 15:39:34.139028 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45ace87-f1a4-47c4-9582-c56942dee924" containerName="extract" Jan 22 15:39:34 crc kubenswrapper[4825]: I0122 15:39:34.139037 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45ace87-f1a4-47c4-9582-c56942dee924" containerName="extract" Jan 22 15:39:34 crc kubenswrapper[4825]: E0122 15:39:34.139046 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45ace87-f1a4-47c4-9582-c56942dee924" containerName="util" Jan 22 15:39:34 crc kubenswrapper[4825]: I0122 15:39:34.139054 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45ace87-f1a4-47c4-9582-c56942dee924" containerName="util" Jan 22 15:39:34 crc kubenswrapper[4825]: I0122 15:39:34.139244 4825 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e45ace87-f1a4-47c4-9582-c56942dee924" containerName="extract" Jan 22 15:39:34 crc kubenswrapper[4825]: I0122 15:39:34.139815 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7bf86bd88b-knxbj" Jan 22 15:39:34 crc kubenswrapper[4825]: I0122 15:39:34.142037 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-rjvkl" Jan 22 15:39:34 crc kubenswrapper[4825]: I0122 15:39:34.159584 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7bf86bd88b-knxbj"] Jan 22 15:39:34 crc kubenswrapper[4825]: I0122 15:39:34.225304 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh4xh\" (UniqueName: \"kubernetes.io/projected/92157fff-bfe1-4bcb-ba7d-617b72c1781c-kube-api-access-zh4xh\") pod \"openstack-operator-controller-init-7bf86bd88b-knxbj\" (UID: \"92157fff-bfe1-4bcb-ba7d-617b72c1781c\") " pod="openstack-operators/openstack-operator-controller-init-7bf86bd88b-knxbj" Jan 22 15:39:34 crc kubenswrapper[4825]: I0122 15:39:34.326782 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh4xh\" (UniqueName: \"kubernetes.io/projected/92157fff-bfe1-4bcb-ba7d-617b72c1781c-kube-api-access-zh4xh\") pod \"openstack-operator-controller-init-7bf86bd88b-knxbj\" (UID: \"92157fff-bfe1-4bcb-ba7d-617b72c1781c\") " pod="openstack-operators/openstack-operator-controller-init-7bf86bd88b-knxbj" Jan 22 15:39:34 crc kubenswrapper[4825]: I0122 15:39:34.365162 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh4xh\" (UniqueName: \"kubernetes.io/projected/92157fff-bfe1-4bcb-ba7d-617b72c1781c-kube-api-access-zh4xh\") pod 
\"openstack-operator-controller-init-7bf86bd88b-knxbj\" (UID: \"92157fff-bfe1-4bcb-ba7d-617b72c1781c\") " pod="openstack-operators/openstack-operator-controller-init-7bf86bd88b-knxbj" Jan 22 15:39:34 crc kubenswrapper[4825]: I0122 15:39:34.458387 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7bf86bd88b-knxbj" Jan 22 15:39:34 crc kubenswrapper[4825]: I0122 15:39:34.697451 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7bf86bd88b-knxbj"] Jan 22 15:39:35 crc kubenswrapper[4825]: I0122 15:39:35.541914 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:39:35 crc kubenswrapper[4825]: I0122 15:39:35.542285 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:39:35 crc kubenswrapper[4825]: I0122 15:39:35.699547 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7bf86bd88b-knxbj" event={"ID":"92157fff-bfe1-4bcb-ba7d-617b72c1781c","Type":"ContainerStarted","Data":"f70f13e3c99a1d37582624d7b31f3fb401d85081ddc66b80cd77570221979910"} Jan 22 15:39:37 crc kubenswrapper[4825]: I0122 15:39:37.491764 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l7259" Jan 22 15:39:40 crc kubenswrapper[4825]: I0122 15:39:40.163263 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-l7259"] Jan 22 15:39:40 crc kubenswrapper[4825]: I0122 15:39:40.164034 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l7259" podUID="f3d2a06f-d565-462c-82d2-c912eb3f9115" containerName="registry-server" containerID="cri-o://997b39dc6b5065bb0b45dc8e4b312bf99508e0cdd5eb5b91e36625375a5ebafe" gracePeriod=2 Jan 22 15:39:40 crc kubenswrapper[4825]: I0122 15:39:40.788602 4825 generic.go:334] "Generic (PLEG): container finished" podID="f3d2a06f-d565-462c-82d2-c912eb3f9115" containerID="997b39dc6b5065bb0b45dc8e4b312bf99508e0cdd5eb5b91e36625375a5ebafe" exitCode=0 Jan 22 15:39:40 crc kubenswrapper[4825]: I0122 15:39:40.788674 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7259" event={"ID":"f3d2a06f-d565-462c-82d2-c912eb3f9115","Type":"ContainerDied","Data":"997b39dc6b5065bb0b45dc8e4b312bf99508e0cdd5eb5b91e36625375a5ebafe"} Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.333182 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7259"
Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.411400 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrnf6\" (UniqueName: \"kubernetes.io/projected/f3d2a06f-d565-462c-82d2-c912eb3f9115-kube-api-access-rrnf6\") pod \"f3d2a06f-d565-462c-82d2-c912eb3f9115\" (UID: \"f3d2a06f-d565-462c-82d2-c912eb3f9115\") "
Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.411836 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3d2a06f-d565-462c-82d2-c912eb3f9115-catalog-content\") pod \"f3d2a06f-d565-462c-82d2-c912eb3f9115\" (UID: \"f3d2a06f-d565-462c-82d2-c912eb3f9115\") "
Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.411885 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3d2a06f-d565-462c-82d2-c912eb3f9115-utilities\") pod \"f3d2a06f-d565-462c-82d2-c912eb3f9115\" (UID: \"f3d2a06f-d565-462c-82d2-c912eb3f9115\") "
Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.412911 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3d2a06f-d565-462c-82d2-c912eb3f9115-utilities" (OuterVolumeSpecName: "utilities") pod "f3d2a06f-d565-462c-82d2-c912eb3f9115" (UID: "f3d2a06f-d565-462c-82d2-c912eb3f9115"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.418391 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3d2a06f-d565-462c-82d2-c912eb3f9115-kube-api-access-rrnf6" (OuterVolumeSpecName: "kube-api-access-rrnf6") pod "f3d2a06f-d565-462c-82d2-c912eb3f9115" (UID: "f3d2a06f-d565-462c-82d2-c912eb3f9115"). InnerVolumeSpecName "kube-api-access-rrnf6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.446355 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3d2a06f-d565-462c-82d2-c912eb3f9115-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3d2a06f-d565-462c-82d2-c912eb3f9115" (UID: "f3d2a06f-d565-462c-82d2-c912eb3f9115"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.513071 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrnf6\" (UniqueName: \"kubernetes.io/projected/f3d2a06f-d565-462c-82d2-c912eb3f9115-kube-api-access-rrnf6\") on node \"crc\" DevicePath \"\""
Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.513106 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3d2a06f-d565-462c-82d2-c912eb3f9115-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.513116 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3d2a06f-d565-462c-82d2-c912eb3f9115-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.812629 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7259" event={"ID":"f3d2a06f-d565-462c-82d2-c912eb3f9115","Type":"ContainerDied","Data":"7fc7d89444269f14bf234e364fc0a598257eaacb386c23faafc90cc4e4bd42b0"}
Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.812672 4825 scope.go:117] "RemoveContainer" containerID="997b39dc6b5065bb0b45dc8e4b312bf99508e0cdd5eb5b91e36625375a5ebafe"
Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.812675 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7259"
Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.813765 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7bf86bd88b-knxbj" event={"ID":"92157fff-bfe1-4bcb-ba7d-617b72c1781c","Type":"ContainerStarted","Data":"810a55fff6baec42bf47d2b022ab560ebe85ab710c91fd60e0a550720be6b50c"}
Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.813864 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7bf86bd88b-knxbj"
Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.833458 4825 scope.go:117] "RemoveContainer" containerID="410b8286cefb53faf667bd3d7158d82d5bcb2675f525e4832ef4784399c55838"
Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.849125 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7bf86bd88b-knxbj" podStartSLOduration=1.3116328959999999 podStartE2EDuration="8.849109674s" podCreationTimestamp="2026-01-22 15:39:34 +0000 UTC" firstStartedPulling="2026-01-22 15:39:34.719019683 +0000 UTC m=+921.480546593" lastFinishedPulling="2026-01-22 15:39:42.256496451 +0000 UTC m=+929.018023371" observedRunningTime="2026-01-22 15:39:42.845341929 +0000 UTC m=+929.606868849" watchObservedRunningTime="2026-01-22 15:39:42.849109674 +0000 UTC m=+929.610636594"
Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.858520 4825 scope.go:117] "RemoveContainer" containerID="7b4a073a59b9bb8a633bb59305dfc3f69ee220cba753e0b85668d22322153ca8"
Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.873784 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7259"]
Jan 22 15:39:42 crc kubenswrapper[4825]: I0122 15:39:42.883760 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7259"]
Jan 22 15:39:43 crc kubenswrapper[4825]: I0122 15:39:43.381600 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jjktq"]
Jan 22 15:39:43 crc kubenswrapper[4825]: E0122 15:39:43.382640 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d2a06f-d565-462c-82d2-c912eb3f9115" containerName="registry-server"
Jan 22 15:39:43 crc kubenswrapper[4825]: I0122 15:39:43.382677 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d2a06f-d565-462c-82d2-c912eb3f9115" containerName="registry-server"
Jan 22 15:39:43 crc kubenswrapper[4825]: E0122 15:39:43.382712 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d2a06f-d565-462c-82d2-c912eb3f9115" containerName="extract-utilities"
Jan 22 15:39:43 crc kubenswrapper[4825]: I0122 15:39:43.382730 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d2a06f-d565-462c-82d2-c912eb3f9115" containerName="extract-utilities"
Jan 22 15:39:43 crc kubenswrapper[4825]: E0122 15:39:43.382769 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d2a06f-d565-462c-82d2-c912eb3f9115" containerName="extract-content"
Jan 22 15:39:43 crc kubenswrapper[4825]: I0122 15:39:43.382786 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d2a06f-d565-462c-82d2-c912eb3f9115" containerName="extract-content"
Jan 22 15:39:43 crc kubenswrapper[4825]: I0122 15:39:43.383145 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d2a06f-d565-462c-82d2-c912eb3f9115" containerName="registry-server"
Jan 22 15:39:43 crc kubenswrapper[4825]: I0122 15:39:43.385030 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jjktq"
Jan 22 15:39:43 crc kubenswrapper[4825]: I0122 15:39:43.396073 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jjktq"]
Jan 22 15:39:43 crc kubenswrapper[4825]: I0122 15:39:43.526747 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3d2a06f-d565-462c-82d2-c912eb3f9115" path="/var/lib/kubelet/pods/f3d2a06f-d565-462c-82d2-c912eb3f9115/volumes"
Jan 22 15:39:43 crc kubenswrapper[4825]: I0122 15:39:43.531034 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/697daa05-e987-4bf2-a924-df734a327432-utilities\") pod \"certified-operators-jjktq\" (UID: \"697daa05-e987-4bf2-a924-df734a327432\") " pod="openshift-marketplace/certified-operators-jjktq"
Jan 22 15:39:43 crc kubenswrapper[4825]: I0122 15:39:43.531073 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/697daa05-e987-4bf2-a924-df734a327432-catalog-content\") pod \"certified-operators-jjktq\" (UID: \"697daa05-e987-4bf2-a924-df734a327432\") " pod="openshift-marketplace/certified-operators-jjktq"
Jan 22 15:39:43 crc kubenswrapper[4825]: I0122 15:39:43.531122 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzkcg\" (UniqueName: \"kubernetes.io/projected/697daa05-e987-4bf2-a924-df734a327432-kube-api-access-fzkcg\") pod \"certified-operators-jjktq\" (UID: \"697daa05-e987-4bf2-a924-df734a327432\") " pod="openshift-marketplace/certified-operators-jjktq"
Jan 22 15:39:43 crc kubenswrapper[4825]: I0122 15:39:43.632018 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/697daa05-e987-4bf2-a924-df734a327432-utilities\") pod \"certified-operators-jjktq\" (UID: \"697daa05-e987-4bf2-a924-df734a327432\") " pod="openshift-marketplace/certified-operators-jjktq"
Jan 22 15:39:43 crc kubenswrapper[4825]: I0122 15:39:43.632073 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/697daa05-e987-4bf2-a924-df734a327432-catalog-content\") pod \"certified-operators-jjktq\" (UID: \"697daa05-e987-4bf2-a924-df734a327432\") " pod="openshift-marketplace/certified-operators-jjktq"
Jan 22 15:39:43 crc kubenswrapper[4825]: I0122 15:39:43.632150 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzkcg\" (UniqueName: \"kubernetes.io/projected/697daa05-e987-4bf2-a924-df734a327432-kube-api-access-fzkcg\") pod \"certified-operators-jjktq\" (UID: \"697daa05-e987-4bf2-a924-df734a327432\") " pod="openshift-marketplace/certified-operators-jjktq"
Jan 22 15:39:43 crc kubenswrapper[4825]: I0122 15:39:43.632563 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/697daa05-e987-4bf2-a924-df734a327432-utilities\") pod \"certified-operators-jjktq\" (UID: \"697daa05-e987-4bf2-a924-df734a327432\") " pod="openshift-marketplace/certified-operators-jjktq"
Jan 22 15:39:43 crc kubenswrapper[4825]: I0122 15:39:43.632832 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/697daa05-e987-4bf2-a924-df734a327432-catalog-content\") pod \"certified-operators-jjktq\" (UID: \"697daa05-e987-4bf2-a924-df734a327432\") " pod="openshift-marketplace/certified-operators-jjktq"
Jan 22 15:39:43 crc kubenswrapper[4825]: I0122 15:39:43.663214 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzkcg\" (UniqueName: \"kubernetes.io/projected/697daa05-e987-4bf2-a924-df734a327432-kube-api-access-fzkcg\") pod \"certified-operators-jjktq\" (UID: \"697daa05-e987-4bf2-a924-df734a327432\") " pod="openshift-marketplace/certified-operators-jjktq"
Jan 22 15:39:43 crc kubenswrapper[4825]: I0122 15:39:43.706749 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jjktq"
Jan 22 15:39:44 crc kubenswrapper[4825]: I0122 15:39:44.160839 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jjktq"]
Jan 22 15:39:44 crc kubenswrapper[4825]: W0122 15:39:44.179562 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod697daa05_e987_4bf2_a924_df734a327432.slice/crio-4764f6f36e1cb016f75fe471c0682ac831b3893c6603261fc69d10718c8c4a34 WatchSource:0}: Error finding container 4764f6f36e1cb016f75fe471c0682ac831b3893c6603261fc69d10718c8c4a34: Status 404 returned error can't find the container with id 4764f6f36e1cb016f75fe471c0682ac831b3893c6603261fc69d10718c8c4a34
Jan 22 15:39:44 crc kubenswrapper[4825]: I0122 15:39:44.840657 4825 generic.go:334] "Generic (PLEG): container finished" podID="697daa05-e987-4bf2-a924-df734a327432" containerID="2dcdcc821f68a530550bcb0f8cbc8c44c632da717e781ca17af3bd043a17c8ee" exitCode=0
Jan 22 15:39:44 crc kubenswrapper[4825]: I0122 15:39:44.840713 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjktq" event={"ID":"697daa05-e987-4bf2-a924-df734a327432","Type":"ContainerDied","Data":"2dcdcc821f68a530550bcb0f8cbc8c44c632da717e781ca17af3bd043a17c8ee"}
Jan 22 15:39:44 crc kubenswrapper[4825]: I0122 15:39:44.840746 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjktq" event={"ID":"697daa05-e987-4bf2-a924-df734a327432","Type":"ContainerStarted","Data":"4764f6f36e1cb016f75fe471c0682ac831b3893c6603261fc69d10718c8c4a34"}
Jan 22 15:39:51 crc kubenswrapper[4825]: I0122 15:39:51.895812 4825 generic.go:334] "Generic (PLEG): container finished" podID="697daa05-e987-4bf2-a924-df734a327432" containerID="026cc17cbf80bd9c769a6601d20723034d43e91c3b10d035cecec41cd2979445" exitCode=0
Jan 22 15:39:51 crc kubenswrapper[4825]: I0122 15:39:51.895873 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjktq" event={"ID":"697daa05-e987-4bf2-a924-df734a327432","Type":"ContainerDied","Data":"026cc17cbf80bd9c769a6601d20723034d43e91c3b10d035cecec41cd2979445"}
Jan 22 15:39:52 crc kubenswrapper[4825]: I0122 15:39:52.909116 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjktq" event={"ID":"697daa05-e987-4bf2-a924-df734a327432","Type":"ContainerStarted","Data":"1f93e42b2f89d4469144392eb85732191db5686b95c0857ff7d51a50ece9a020"}
Jan 22 15:39:52 crc kubenswrapper[4825]: I0122 15:39:52.929505 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jjktq" podStartSLOduration=2.501431371 podStartE2EDuration="9.929482299s" podCreationTimestamp="2026-01-22 15:39:43 +0000 UTC" firstStartedPulling="2026-01-22 15:39:44.843379361 +0000 UTC m=+931.604906281" lastFinishedPulling="2026-01-22 15:39:52.271430289 +0000 UTC m=+939.032957209" observedRunningTime="2026-01-22 15:39:52.927790612 +0000 UTC m=+939.689317562" watchObservedRunningTime="2026-01-22 15:39:52.929482299 +0000 UTC m=+939.691009209"
Jan 22 15:39:53 crc kubenswrapper[4825]: I0122 15:39:53.707699 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jjktq"
Jan 22 15:39:53 crc kubenswrapper[4825]: I0122 15:39:53.708044 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jjktq"
Jan 22 15:39:54 crc kubenswrapper[4825]: I0122 15:39:54.462043 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7bf86bd88b-knxbj"
Jan 22 15:39:54 crc kubenswrapper[4825]: I0122 15:39:54.751847 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jjktq" podUID="697daa05-e987-4bf2-a924-df734a327432" containerName="registry-server" probeResult="failure" output=<
Jan 22 15:39:54 crc kubenswrapper[4825]: timeout: failed to connect service ":50051" within 1s
Jan 22 15:39:54 crc kubenswrapper[4825]: >
Jan 22 15:40:03 crc kubenswrapper[4825]: I0122 15:40:03.868147 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jjktq"
Jan 22 15:40:04 crc kubenswrapper[4825]: I0122 15:40:04.125972 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jjktq"
Jan 22 15:40:04 crc kubenswrapper[4825]: I0122 15:40:04.355022 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jjktq"]
Jan 22 15:40:04 crc kubenswrapper[4825]: I0122 15:40:04.532143 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c4mc8"]
Jan 22 15:40:04 crc kubenswrapper[4825]: I0122 15:40:04.532419 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c4mc8" podUID="3b091b06-afc3-4d38-9ad4-16003718f00e" containerName="registry-server" containerID="cri-o://22032d6cae11896c967f79aff90c28517e704b5c254785c11981dbae08d23629" gracePeriod=2
Jan 22 15:40:04 crc kubenswrapper[4825]: I0122 15:40:04.544752 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-87fzs"]
Jan 22 15:40:04 crc kubenswrapper[4825]: I0122 15:40:04.545044 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-87fzs" podUID="d4d396c4-dbe4-4672-af11-a5db1019b169" containerName="registry-server" containerID="cri-o://c360f3be074989524756d1878fd5cc14498d9df4e99dff63e44787dd2e345fa5" gracePeriod=2
Jan 22 15:40:05 crc kubenswrapper[4825]: I0122 15:40:05.541562 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 15:40:05 crc kubenswrapper[4825]: I0122 15:40:05.541839 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 15:40:06 crc kubenswrapper[4825]: E0122 15:40:06.100578 4825 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c360f3be074989524756d1878fd5cc14498d9df4e99dff63e44787dd2e345fa5 is running failed: container process not found" containerID="c360f3be074989524756d1878fd5cc14498d9df4e99dff63e44787dd2e345fa5" cmd=["grpc_health_probe","-addr=:50051"]
Jan 22 15:40:06 crc kubenswrapper[4825]: E0122 15:40:06.100971 4825 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c360f3be074989524756d1878fd5cc14498d9df4e99dff63e44787dd2e345fa5 is running failed: container process not found" containerID="c360f3be074989524756d1878fd5cc14498d9df4e99dff63e44787dd2e345fa5" cmd=["grpc_health_probe","-addr=:50051"]
Jan 22 15:40:06 crc kubenswrapper[4825]: E0122 15:40:06.101385 4825 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c360f3be074989524756d1878fd5cc14498d9df4e99dff63e44787dd2e345fa5 is running failed: container process not found" containerID="c360f3be074989524756d1878fd5cc14498d9df4e99dff63e44787dd2e345fa5" cmd=["grpc_health_probe","-addr=:50051"]
Jan 22 15:40:06 crc kubenswrapper[4825]: E0122 15:40:06.101468 4825 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c360f3be074989524756d1878fd5cc14498d9df4e99dff63e44787dd2e345fa5 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-87fzs" podUID="d4d396c4-dbe4-4672-af11-a5db1019b169" containerName="registry-server"
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.154852 4825 generic.go:334] "Generic (PLEG): container finished" podID="d4d396c4-dbe4-4672-af11-a5db1019b169" containerID="c360f3be074989524756d1878fd5cc14498d9df4e99dff63e44787dd2e345fa5" exitCode=0
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.155030 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87fzs" event={"ID":"d4d396c4-dbe4-4672-af11-a5db1019b169","Type":"ContainerDied","Data":"c360f3be074989524756d1878fd5cc14498d9df4e99dff63e44787dd2e345fa5"}
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.157656 4825 generic.go:334] "Generic (PLEG): container finished" podID="3b091b06-afc3-4d38-9ad4-16003718f00e" containerID="22032d6cae11896c967f79aff90c28517e704b5c254785c11981dbae08d23629" exitCode=0
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.157686 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4mc8" event={"ID":"3b091b06-afc3-4d38-9ad4-16003718f00e","Type":"ContainerDied","Data":"22032d6cae11896c967f79aff90c28517e704b5c254785c11981dbae08d23629"}
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.157707 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4mc8" event={"ID":"3b091b06-afc3-4d38-9ad4-16003718f00e","Type":"ContainerDied","Data":"9e125f2ed404df41a89cfd031927a336c50015686ae05a56a281d60d38d3a37b"}
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.157720 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e125f2ed404df41a89cfd031927a336c50015686ae05a56a281d60d38d3a37b"
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.193138 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4mc8"
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.302379 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-87fzs"
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.337090 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv4mx\" (UniqueName: \"kubernetes.io/projected/3b091b06-afc3-4d38-9ad4-16003718f00e-kube-api-access-xv4mx\") pod \"3b091b06-afc3-4d38-9ad4-16003718f00e\" (UID: \"3b091b06-afc3-4d38-9ad4-16003718f00e\") "
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.337215 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b091b06-afc3-4d38-9ad4-16003718f00e-utilities\") pod \"3b091b06-afc3-4d38-9ad4-16003718f00e\" (UID: \"3b091b06-afc3-4d38-9ad4-16003718f00e\") "
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.337264 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b091b06-afc3-4d38-9ad4-16003718f00e-catalog-content\") pod \"3b091b06-afc3-4d38-9ad4-16003718f00e\" (UID: \"3b091b06-afc3-4d38-9ad4-16003718f00e\") "
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.338606 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b091b06-afc3-4d38-9ad4-16003718f00e-utilities" (OuterVolumeSpecName: "utilities") pod "3b091b06-afc3-4d38-9ad4-16003718f00e" (UID: "3b091b06-afc3-4d38-9ad4-16003718f00e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.345868 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b091b06-afc3-4d38-9ad4-16003718f00e-kube-api-access-xv4mx" (OuterVolumeSpecName: "kube-api-access-xv4mx") pod "3b091b06-afc3-4d38-9ad4-16003718f00e" (UID: "3b091b06-afc3-4d38-9ad4-16003718f00e"). InnerVolumeSpecName "kube-api-access-xv4mx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.400565 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b091b06-afc3-4d38-9ad4-16003718f00e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b091b06-afc3-4d38-9ad4-16003718f00e" (UID: "3b091b06-afc3-4d38-9ad4-16003718f00e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.438824 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxbtw\" (UniqueName: \"kubernetes.io/projected/d4d396c4-dbe4-4672-af11-a5db1019b169-kube-api-access-lxbtw\") pod \"d4d396c4-dbe4-4672-af11-a5db1019b169\" (UID: \"d4d396c4-dbe4-4672-af11-a5db1019b169\") "
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.439247 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4d396c4-dbe4-4672-af11-a5db1019b169-utilities\") pod \"d4d396c4-dbe4-4672-af11-a5db1019b169\" (UID: \"d4d396c4-dbe4-4672-af11-a5db1019b169\") "
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.439359 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4d396c4-dbe4-4672-af11-a5db1019b169-catalog-content\") pod \"d4d396c4-dbe4-4672-af11-a5db1019b169\" (UID: \"d4d396c4-dbe4-4672-af11-a5db1019b169\") "
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.439694 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv4mx\" (UniqueName: \"kubernetes.io/projected/3b091b06-afc3-4d38-9ad4-16003718f00e-kube-api-access-xv4mx\") on node \"crc\" DevicePath \"\""
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.439782 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b091b06-afc3-4d38-9ad4-16003718f00e-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.439854 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b091b06-afc3-4d38-9ad4-16003718f00e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.440058 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4d396c4-dbe4-4672-af11-a5db1019b169-utilities" (OuterVolumeSpecName: "utilities") pod "d4d396c4-dbe4-4672-af11-a5db1019b169" (UID: "d4d396c4-dbe4-4672-af11-a5db1019b169"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.442851 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4d396c4-dbe4-4672-af11-a5db1019b169-kube-api-access-lxbtw" (OuterVolumeSpecName: "kube-api-access-lxbtw") pod "d4d396c4-dbe4-4672-af11-a5db1019b169" (UID: "d4d396c4-dbe4-4672-af11-a5db1019b169"). InnerVolumeSpecName "kube-api-access-lxbtw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.487928 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4d396c4-dbe4-4672-af11-a5db1019b169-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4d396c4-dbe4-4672-af11-a5db1019b169" (UID: "d4d396c4-dbe4-4672-af11-a5db1019b169"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.541249 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4d396c4-dbe4-4672-af11-a5db1019b169-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.541289 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4d396c4-dbe4-4672-af11-a5db1019b169-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 15:40:06 crc kubenswrapper[4825]: I0122 15:40:06.541302 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxbtw\" (UniqueName: \"kubernetes.io/projected/d4d396c4-dbe4-4672-af11-a5db1019b169-kube-api-access-lxbtw\") on node \"crc\" DevicePath \"\""
Jan 22 15:40:07 crc kubenswrapper[4825]: I0122 15:40:07.167024 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4mc8"
Jan 22 15:40:07 crc kubenswrapper[4825]: I0122 15:40:07.167250 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87fzs" event={"ID":"d4d396c4-dbe4-4672-af11-a5db1019b169","Type":"ContainerDied","Data":"bd78776f08e1c26833bf9573dfe66f10b2055d6799170a59f3a48ee70285ca81"}
Jan 22 15:40:07 crc kubenswrapper[4825]: I0122 15:40:07.167335 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-87fzs"
Jan 22 15:40:07 crc kubenswrapper[4825]: I0122 15:40:07.168259 4825 scope.go:117] "RemoveContainer" containerID="c360f3be074989524756d1878fd5cc14498d9df4e99dff63e44787dd2e345fa5"
Jan 22 15:40:07 crc kubenswrapper[4825]: I0122 15:40:07.187782 4825 scope.go:117] "RemoveContainer" containerID="28610f6fde7b6a4caf0263126c1cc3490a03d0adceb72aa689969839f9ec9c6c"
Jan 22 15:40:07 crc kubenswrapper[4825]: I0122 15:40:07.208199 4825 scope.go:117] "RemoveContainer" containerID="82890d3418884942fabd00b4e732428874d6bfc2486552e870161af11c92479b"
Jan 22 15:40:07 crc kubenswrapper[4825]: I0122 15:40:07.210202 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c4mc8"]
Jan 22 15:40:07 crc kubenswrapper[4825]: I0122 15:40:07.221262 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c4mc8"]
Jan 22 15:40:07 crc kubenswrapper[4825]: I0122 15:40:07.227159 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-87fzs"]
Jan 22 15:40:07 crc kubenswrapper[4825]: I0122 15:40:07.231426 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-87fzs"]
Jan 22 15:40:07 crc kubenswrapper[4825]: I0122 15:40:07.528949 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b091b06-afc3-4d38-9ad4-16003718f00e" path="/var/lib/kubelet/pods/3b091b06-afc3-4d38-9ad4-16003718f00e/volumes"
Jan 22 15:40:07 crc kubenswrapper[4825]: I0122 15:40:07.530643 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4d396c4-dbe4-4672-af11-a5db1019b169" path="/var/lib/kubelet/pods/d4d396c4-dbe4-4672-af11-a5db1019b169/volumes"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.203256 4825 scope.go:117] "RemoveContainer" containerID="22032d6cae11896c967f79aff90c28517e704b5c254785c11981dbae08d23629"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.221882 4825 scope.go:117] "RemoveContainer" containerID="83dfdb8a7f5c205630462313c7fc1cd8b9d0215c347ddf8429409272899b9a40"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.253010 4825 scope.go:117] "RemoveContainer" containerID="4d063c7264e8b26830dd2d8725e62b10e9fca7c519c1578f2368c87cd83d4563"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.621638 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wtvfh"]
Jan 22 15:40:14 crc kubenswrapper[4825]: E0122 15:40:14.622000 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d396c4-dbe4-4672-af11-a5db1019b169" containerName="extract-content"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.622019 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d396c4-dbe4-4672-af11-a5db1019b169" containerName="extract-content"
Jan 22 15:40:14 crc kubenswrapper[4825]: E0122 15:40:14.622034 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d396c4-dbe4-4672-af11-a5db1019b169" containerName="extract-utilities"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.622041 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d396c4-dbe4-4672-af11-a5db1019b169" containerName="extract-utilities"
Jan 22 15:40:14 crc kubenswrapper[4825]: E0122 15:40:14.622049 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d396c4-dbe4-4672-af11-a5db1019b169" containerName="registry-server"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.622057 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d396c4-dbe4-4672-af11-a5db1019b169" containerName="registry-server"
Jan 22 15:40:14 crc kubenswrapper[4825]: E0122 15:40:14.622073 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b091b06-afc3-4d38-9ad4-16003718f00e" containerName="registry-server"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.622079 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b091b06-afc3-4d38-9ad4-16003718f00e" containerName="registry-server"
Jan 22 15:40:14 crc kubenswrapper[4825]: E0122 15:40:14.622095 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b091b06-afc3-4d38-9ad4-16003718f00e" containerName="extract-utilities"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.622104 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b091b06-afc3-4d38-9ad4-16003718f00e" containerName="extract-utilities"
Jan 22 15:40:14 crc kubenswrapper[4825]: E0122 15:40:14.622119 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b091b06-afc3-4d38-9ad4-16003718f00e" containerName="extract-content"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.622126 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b091b06-afc3-4d38-9ad4-16003718f00e" containerName="extract-content"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.622698 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b091b06-afc3-4d38-9ad4-16003718f00e" containerName="registry-server"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.622727 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d396c4-dbe4-4672-af11-a5db1019b169" containerName="registry-server"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.623897 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wtvfh"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.634242 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-ppm6v"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.651087 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wtvfh"]
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.662114 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-bj5dt"]
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.666043 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-bj5dt"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.668941 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-58dtw"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.686730 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-p8542"]
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.688257 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p8542"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.691914 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-mg52d"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.701386 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-p8542"]
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.718636 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-bj5dt"]
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.752316 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-swmt9"]
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.753453 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-swmt9"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.756347 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-k4fd6"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.756598 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fntv2"]
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.757698 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fntv2"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.761207 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-qbwhc"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.768509 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdjvg\" (UniqueName: \"kubernetes.io/projected/d5ca055b-760a-4356-a32e-4b2358edbe73-kube-api-access-cdjvg\") pod \"cinder-operator-controller-manager-69cf5d4557-bj5dt\" (UID: \"d5ca055b-760a-4356-a32e-4b2358edbe73\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-bj5dt"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.768586 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt6f8\" (UniqueName: \"kubernetes.io/projected/a2489602-cadb-4351-96b5-5727dbeb521d-kube-api-access-tt6f8\") pod \"barbican-operator-controller-manager-59dd8b7cbf-wtvfh\" (UID: \"a2489602-cadb-4351-96b5-5727dbeb521d\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wtvfh"
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.775052 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-swmt9"]
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.785700 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fntv2"]
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.797037 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-j9zxq"]
Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.798360 4825 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-j9zxq" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.857910 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-bs74x" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.859583 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-j9zxq"] Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.869366 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6w8g\" (UniqueName: \"kubernetes.io/projected/15cb9f6b-3766-4ac5-8272-cec4434eebcd-kube-api-access-s6w8g\") pod \"heat-operator-controller-manager-594c8c9d5d-fntv2\" (UID: \"15cb9f6b-3766-4ac5-8272-cec4434eebcd\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fntv2" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.869464 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4zgd\" (UniqueName: \"kubernetes.io/projected/fc586b00-c7d2-47f7-bb91-8d9740048538-kube-api-access-s4zgd\") pod \"glance-operator-controller-manager-78fdd796fd-swmt9\" (UID: \"fc586b00-c7d2-47f7-bb91-8d9740048538\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-swmt9" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.869490 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gclfq\" (UniqueName: \"kubernetes.io/projected/fb806cbf-2796-48f0-980a-5ab87a967cc7-kube-api-access-gclfq\") pod \"designate-operator-controller-manager-b45d7bf98-p8542\" (UID: \"fb806cbf-2796-48f0-980a-5ab87a967cc7\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p8542" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 
15:40:14.869536 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdjvg\" (UniqueName: \"kubernetes.io/projected/d5ca055b-760a-4356-a32e-4b2358edbe73-kube-api-access-cdjvg\") pod \"cinder-operator-controller-manager-69cf5d4557-bj5dt\" (UID: \"d5ca055b-760a-4356-a32e-4b2358edbe73\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-bj5dt" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.869562 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt6f8\" (UniqueName: \"kubernetes.io/projected/a2489602-cadb-4351-96b5-5727dbeb521d-kube-api-access-tt6f8\") pod \"barbican-operator-controller-manager-59dd8b7cbf-wtvfh\" (UID: \"a2489602-cadb-4351-96b5-5727dbeb521d\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wtvfh" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.870941 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m"] Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.872010 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.875313 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.878808 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-s8f9m" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.888241 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m"] Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.904311 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-w2xrz"] Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.905315 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-w2xrz" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.906955 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-rnwfl" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.909042 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-w2xrz"] Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.919053 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-4j7zw"] Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.920204 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4j7zw" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.924632 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-tlpzm" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.936188 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdjvg\" (UniqueName: \"kubernetes.io/projected/d5ca055b-760a-4356-a32e-4b2358edbe73-kube-api-access-cdjvg\") pod \"cinder-operator-controller-manager-69cf5d4557-bj5dt\" (UID: \"d5ca055b-760a-4356-a32e-4b2358edbe73\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-bj5dt" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.938635 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt6f8\" (UniqueName: \"kubernetes.io/projected/a2489602-cadb-4351-96b5-5727dbeb521d-kube-api-access-tt6f8\") pod \"barbican-operator-controller-manager-59dd8b7cbf-wtvfh\" (UID: \"a2489602-cadb-4351-96b5-5727dbeb521d\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wtvfh" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.954043 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-8jv66"] Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.955112 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-8jv66" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.956893 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-xc7rj" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.959572 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-4j7zw"] Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.965809 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-8jv66"] Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.970402 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx2zf\" (UniqueName: \"kubernetes.io/projected/e3094c32-0018-4519-8606-e0e3a3420425-kube-api-access-kx2zf\") pod \"horizon-operator-controller-manager-77d5c5b54f-j9zxq\" (UID: \"e3094c32-0018-4519-8606-e0e3a3420425\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-j9zxq" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.970470 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6w8g\" (UniqueName: \"kubernetes.io/projected/15cb9f6b-3766-4ac5-8272-cec4434eebcd-kube-api-access-s6w8g\") pod \"heat-operator-controller-manager-594c8c9d5d-fntv2\" (UID: \"15cb9f6b-3766-4ac5-8272-cec4434eebcd\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fntv2" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.970522 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4zgd\" (UniqueName: \"kubernetes.io/projected/fc586b00-c7d2-47f7-bb91-8d9740048538-kube-api-access-s4zgd\") pod \"glance-operator-controller-manager-78fdd796fd-swmt9\" (UID: 
\"fc586b00-c7d2-47f7-bb91-8d9740048538\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-swmt9" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.970554 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gclfq\" (UniqueName: \"kubernetes.io/projected/fb806cbf-2796-48f0-980a-5ab87a967cc7-kube-api-access-gclfq\") pod \"designate-operator-controller-manager-b45d7bf98-p8542\" (UID: \"fb806cbf-2796-48f0-980a-5ab87a967cc7\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p8542" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.985867 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-hxvkd"] Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.986859 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hxvkd" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.990272 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-cmm78" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.992850 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wtvfh" Jan 22 15:40:14 crc kubenswrapper[4825]: I0122 15:40:14.998578 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tfd8w"] Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:14.999657 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tfd8w" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.007626 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-t284k" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.008586 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6w8g\" (UniqueName: \"kubernetes.io/projected/15cb9f6b-3766-4ac5-8272-cec4434eebcd-kube-api-access-s6w8g\") pod \"heat-operator-controller-manager-594c8c9d5d-fntv2\" (UID: \"15cb9f6b-3766-4ac5-8272-cec4434eebcd\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fntv2" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.025439 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-5js5c"] Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.026445 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-5js5c" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.028795 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-5rz7v" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.029119 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-bj5dt" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.044468 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gclfq\" (UniqueName: \"kubernetes.io/projected/fb806cbf-2796-48f0-980a-5ab87a967cc7-kube-api-access-gclfq\") pod \"designate-operator-controller-manager-b45d7bf98-p8542\" (UID: \"fb806cbf-2796-48f0-980a-5ab87a967cc7\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p8542" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.045467 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p8542" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.060717 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4zgd\" (UniqueName: \"kubernetes.io/projected/fc586b00-c7d2-47f7-bb91-8d9740048538-kube-api-access-s4zgd\") pod \"glance-operator-controller-manager-78fdd796fd-swmt9\" (UID: \"fc586b00-c7d2-47f7-bb91-8d9740048538\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-swmt9" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.075035 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d369dfbc-d830-4221-aca1-386666bca9a7-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-c6r4m\" (UID: \"d369dfbc-d830-4221-aca1-386666bca9a7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.075104 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7jft\" (UniqueName: \"kubernetes.io/projected/aa7fc7cf-0d1f-4caa-905b-add971620c70-kube-api-access-q7jft\") pod 
\"keystone-operator-controller-manager-b8b6d4659-4j7zw\" (UID: \"aa7fc7cf-0d1f-4caa-905b-add971620c70\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4j7zw" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.075134 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l64kg\" (UniqueName: \"kubernetes.io/projected/02608e48-40b1-4179-ac70-99aad7341dbf-kube-api-access-l64kg\") pod \"manila-operator-controller-manager-78c6999f6f-8jv66\" (UID: \"02608e48-40b1-4179-ac70-99aad7341dbf\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-8jv66" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.075172 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpdtq\" (UniqueName: \"kubernetes.io/projected/8739e7a5-b09f-4908-b1e8-893e06e8c0d5-kube-api-access-qpdtq\") pod \"ironic-operator-controller-manager-69d6c9f5b8-w2xrz\" (UID: \"8739e7a5-b09f-4908-b1e8-893e06e8c0d5\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-w2xrz" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.075199 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqjzc\" (UniqueName: \"kubernetes.io/projected/d369dfbc-d830-4221-aca1-386666bca9a7-kube-api-access-jqjzc\") pod \"infra-operator-controller-manager-54ccf4f85d-c6r4m\" (UID: \"d369dfbc-d830-4221-aca1-386666bca9a7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.075252 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx2zf\" (UniqueName: \"kubernetes.io/projected/e3094c32-0018-4519-8606-e0e3a3420425-kube-api-access-kx2zf\") pod \"horizon-operator-controller-manager-77d5c5b54f-j9zxq\" (UID: \"e3094c32-0018-4519-8606-e0e3a3420425\") " 
pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-j9zxq" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.089367 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tfd8w"] Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.095501 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-hxvkd"] Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.208241 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-swmt9" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.208728 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fntv2" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.232718 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx2zf\" (UniqueName: \"kubernetes.io/projected/e3094c32-0018-4519-8606-e0e3a3420425-kube-api-access-kx2zf\") pod \"horizon-operator-controller-manager-77d5c5b54f-j9zxq\" (UID: \"e3094c32-0018-4519-8606-e0e3a3420425\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-j9zxq" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.233006 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-j9zxq" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.233494 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d369dfbc-d830-4221-aca1-386666bca9a7-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-c6r4m\" (UID: \"d369dfbc-d830-4221-aca1-386666bca9a7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.233533 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2tfz\" (UniqueName: \"kubernetes.io/projected/dd5e1412-572a-4014-ae4b-69415ab62800-kube-api-access-l2tfz\") pod \"neutron-operator-controller-manager-5d8f59fb49-tfd8w\" (UID: \"dd5e1412-572a-4014-ae4b-69415ab62800\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tfd8w" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.233564 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7jft\" (UniqueName: \"kubernetes.io/projected/aa7fc7cf-0d1f-4caa-905b-add971620c70-kube-api-access-q7jft\") pod \"keystone-operator-controller-manager-b8b6d4659-4j7zw\" (UID: \"aa7fc7cf-0d1f-4caa-905b-add971620c70\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4j7zw" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.233620 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l64kg\" (UniqueName: \"kubernetes.io/projected/02608e48-40b1-4179-ac70-99aad7341dbf-kube-api-access-l64kg\") pod \"manila-operator-controller-manager-78c6999f6f-8jv66\" (UID: \"02608e48-40b1-4179-ac70-99aad7341dbf\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-8jv66" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.233664 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpdtq\" (UniqueName: \"kubernetes.io/projected/8739e7a5-b09f-4908-b1e8-893e06e8c0d5-kube-api-access-qpdtq\") pod \"ironic-operator-controller-manager-69d6c9f5b8-w2xrz\" (UID: \"8739e7a5-b09f-4908-b1e8-893e06e8c0d5\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-w2xrz" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.233695 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqjzc\" (UniqueName: \"kubernetes.io/projected/d369dfbc-d830-4221-aca1-386666bca9a7-kube-api-access-jqjzc\") pod \"infra-operator-controller-manager-54ccf4f85d-c6r4m\" (UID: \"d369dfbc-d830-4221-aca1-386666bca9a7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.233733 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-786sl\" (UniqueName: \"kubernetes.io/projected/db3cdd8c-dec4-42cc-bb80-f29321423ab7-kube-api-access-786sl\") pod \"mariadb-operator-controller-manager-c87fff755-hxvkd\" (UID: \"db3cdd8c-dec4-42cc-bb80-f29321423ab7\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hxvkd" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.233751 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjpnz\" (UniqueName: \"kubernetes.io/projected/f1b06b68-e7ff-45d0-aaf2-7ee63d7d4ec5-kube-api-access-jjpnz\") pod \"nova-operator-controller-manager-6b8bc8d87d-5js5c\" (UID: \"f1b06b68-e7ff-45d0-aaf2-7ee63d7d4ec5\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-5js5c" Jan 22 15:40:15 crc kubenswrapper[4825]: E0122 15:40:15.233868 4825 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Jan 22 15:40:15 crc kubenswrapper[4825]: E0122 15:40:15.233905 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d369dfbc-d830-4221-aca1-386666bca9a7-cert podName:d369dfbc-d830-4221-aca1-386666bca9a7 nodeName:}" failed. No retries permitted until 2026-01-22 15:40:15.733889065 +0000 UTC m=+962.495415975 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d369dfbc-d830-4221-aca1-386666bca9a7-cert") pod "infra-operator-controller-manager-54ccf4f85d-c6r4m" (UID: "d369dfbc-d830-4221-aca1-386666bca9a7") : secret "infra-operator-webhook-server-cert" not found Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.270721 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l64kg\" (UniqueName: \"kubernetes.io/projected/02608e48-40b1-4179-ac70-99aad7341dbf-kube-api-access-l64kg\") pod \"manila-operator-controller-manager-78c6999f6f-8jv66\" (UID: \"02608e48-40b1-4179-ac70-99aad7341dbf\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-8jv66" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.282447 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpdtq\" (UniqueName: \"kubernetes.io/projected/8739e7a5-b09f-4908-b1e8-893e06e8c0d5-kube-api-access-qpdtq\") pod \"ironic-operator-controller-manager-69d6c9f5b8-w2xrz\" (UID: \"8739e7a5-b09f-4908-b1e8-893e06e8c0d5\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-w2xrz" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.291611 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-w2xrz" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.307701 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-j6kq4"] Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.351526 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqjzc\" (UniqueName: \"kubernetes.io/projected/d369dfbc-d830-4221-aca1-386666bca9a7-kube-api-access-jqjzc\") pod \"infra-operator-controller-manager-54ccf4f85d-c6r4m\" (UID: \"d369dfbc-d830-4221-aca1-386666bca9a7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.484245 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-j6kq4" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.487740 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-zqlh8" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.490457 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-8jv66" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.492835 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2tfz\" (UniqueName: \"kubernetes.io/projected/dd5e1412-572a-4014-ae4b-69415ab62800-kube-api-access-l2tfz\") pod \"neutron-operator-controller-manager-5d8f59fb49-tfd8w\" (UID: \"dd5e1412-572a-4014-ae4b-69415ab62800\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tfd8w" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.492935 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-786sl\" (UniqueName: \"kubernetes.io/projected/db3cdd8c-dec4-42cc-bb80-f29321423ab7-kube-api-access-786sl\") pod \"mariadb-operator-controller-manager-c87fff755-hxvkd\" (UID: \"db3cdd8c-dec4-42cc-bb80-f29321423ab7\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hxvkd" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.493034 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjpnz\" (UniqueName: \"kubernetes.io/projected/f1b06b68-e7ff-45d0-aaf2-7ee63d7d4ec5-kube-api-access-jjpnz\") pod \"nova-operator-controller-manager-6b8bc8d87d-5js5c\" (UID: \"f1b06b68-e7ff-45d0-aaf2-7ee63d7d4ec5\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-5js5c" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.493112 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcjhb\" (UniqueName: \"kubernetes.io/projected/886df838-09b9-423d-a8b6-3a5d428a0d30-kube-api-access-pcjhb\") pod \"octavia-operator-controller-manager-7bd9774b6-j6kq4\" (UID: \"886df838-09b9-423d-a8b6-3a5d428a0d30\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-j6kq4" Jan 22 15:40:15 crc kubenswrapper[4825]: 
I0122 15:40:15.808761 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7jft\" (UniqueName: \"kubernetes.io/projected/aa7fc7cf-0d1f-4caa-905b-add971620c70-kube-api-access-q7jft\") pod \"keystone-operator-controller-manager-b8b6d4659-4j7zw\" (UID: \"aa7fc7cf-0d1f-4caa-905b-add971620c70\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4j7zw" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.817322 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2tfz\" (UniqueName: \"kubernetes.io/projected/dd5e1412-572a-4014-ae4b-69415ab62800-kube-api-access-l2tfz\") pod \"neutron-operator-controller-manager-5d8f59fb49-tfd8w\" (UID: \"dd5e1412-572a-4014-ae4b-69415ab62800\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tfd8w" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.824514 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-786sl\" (UniqueName: \"kubernetes.io/projected/db3cdd8c-dec4-42cc-bb80-f29321423ab7-kube-api-access-786sl\") pod \"mariadb-operator-controller-manager-c87fff755-hxvkd\" (UID: \"db3cdd8c-dec4-42cc-bb80-f29321423ab7\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hxvkd" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.900558 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjpnz\" (UniqueName: \"kubernetes.io/projected/f1b06b68-e7ff-45d0-aaf2-7ee63d7d4ec5-kube-api-access-jjpnz\") pod \"nova-operator-controller-manager-6b8bc8d87d-5js5c\" (UID: \"f1b06b68-e7ff-45d0-aaf2-7ee63d7d4ec5\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-5js5c" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.907511 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcjhb\" (UniqueName: 
\"kubernetes.io/projected/886df838-09b9-423d-a8b6-3a5d428a0d30-kube-api-access-pcjhb\") pod \"octavia-operator-controller-manager-7bd9774b6-j6kq4\" (UID: \"886df838-09b9-423d-a8b6-3a5d428a0d30\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-j6kq4" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.907604 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d369dfbc-d830-4221-aca1-386666bca9a7-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-c6r4m\" (UID: \"d369dfbc-d830-4221-aca1-386666bca9a7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m" Jan 22 15:40:15 crc kubenswrapper[4825]: E0122 15:40:15.907905 4825 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 15:40:15 crc kubenswrapper[4825]: E0122 15:40:15.907971 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d369dfbc-d830-4221-aca1-386666bca9a7-cert podName:d369dfbc-d830-4221-aca1-386666bca9a7 nodeName:}" failed. No retries permitted until 2026-01-22 15:40:16.907949874 +0000 UTC m=+963.669476784 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d369dfbc-d830-4221-aca1-386666bca9a7-cert") pod "infra-operator-controller-manager-54ccf4f85d-c6r4m" (UID: "d369dfbc-d830-4221-aca1-386666bca9a7") : secret "infra-operator-webhook-server-cert" not found Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.969917 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4j7zw" Jan 22 15:40:15 crc kubenswrapper[4825]: I0122 15:40:15.972214 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcjhb\" (UniqueName: \"kubernetes.io/projected/886df838-09b9-423d-a8b6-3a5d428a0d30-kube-api-access-pcjhb\") pod \"octavia-operator-controller-manager-7bd9774b6-j6kq4\" (UID: \"886df838-09b9-423d-a8b6-3a5d428a0d30\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-j6kq4" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.087285 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-5js5c"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.087318 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-j6kq4"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.087334 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.088134 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-9c9tx"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.088391 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.088684 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.088707 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-9c9tx"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.088718 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-5wxzj"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.088879 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9c9tx" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.089298 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5wxzj" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.091471 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.091747 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-rgld7" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.093554 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-5wxzj"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.094410 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-kt7vp" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.094527 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-r9fjv" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.100923 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-bmm4t"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.101965 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bmm4t" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.103794 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4qk87" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.106100 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fbc679d4d-92fqf"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.108620 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fbc679d4d-92fqf" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.109428 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tfd8w" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.110564 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-5js5c" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.111827 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-z6hmn" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.124066 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-bmm4t"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.138846 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-86476"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.139833 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-86476" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.140456 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hxvkd" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.145445 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-fwfg4" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.149243 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fbc679d4d-92fqf"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.166870 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-86476"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.176348 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-4ghsm"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.177467 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-4ghsm" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.179685 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-8nf9z" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.200392 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-4ghsm"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.212094 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-848467994c-vvklq"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.213453 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.214212 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wsz8\" (UniqueName: \"kubernetes.io/projected/7042acfa-6435-4bfc-812b-45bbb2523cf9-kube-api-access-9wsz8\") pod \"swift-operator-controller-manager-547cbdb99f-bmm4t\" (UID: \"7042acfa-6435-4bfc-812b-45bbb2523cf9\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bmm4t" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.214348 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzpnp\" (UniqueName: \"kubernetes.io/projected/059620d7-dcad-4c62-804b-f92566f0fd85-kube-api-access-xzpnp\") pod \"telemetry-operator-controller-manager-5fbc679d4d-92fqf\" (UID: \"059620d7-dcad-4c62-804b-f92566f0fd85\") " pod="openstack-operators/telemetry-operator-controller-manager-5fbc679d4d-92fqf" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.214375 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w\" (UID: \"10a92b09-1701-49e1-bb4c-e715ddf9ff4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.214434 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2scdn\" (UniqueName: \"kubernetes.io/projected/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-kube-api-access-2scdn\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w\" (UID: \"10a92b09-1701-49e1-bb4c-e715ddf9ff4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.214472 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbtkq\" (UniqueName: \"kubernetes.io/projected/c07e4778-10c8-4074-ac87-f6088891be7c-kube-api-access-wbtkq\") pod \"ovn-operator-controller-manager-55db956ddc-9c9tx\" (UID: \"c07e4778-10c8-4074-ac87-f6088891be7c\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9c9tx" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.214511 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhx9c\" (UniqueName: \"kubernetes.io/projected/24600886-dd49-445f-bdcd-ed919ec8fd02-kube-api-access-qhx9c\") pod \"placement-operator-controller-manager-5d646b7d76-5wxzj\" (UID: \"24600886-dd49-445f-bdcd-ed919ec8fd02\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5wxzj" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.225507 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-848467994c-vvklq"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.238937 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r7vn5"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.240025 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r7vn5"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.240124 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r7vn5" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.253541 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.253924 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-cjfrj" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.254358 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.254622 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-9lvds" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.258188 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wtvfh"] Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.534947 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzpnp\" (UniqueName: \"kubernetes.io/projected/059620d7-dcad-4c62-804b-f92566f0fd85-kube-api-access-xzpnp\") pod \"telemetry-operator-controller-manager-5fbc679d4d-92fqf\" (UID: 
\"059620d7-dcad-4c62-804b-f92566f0fd85\") " pod="openstack-operators/telemetry-operator-controller-manager-5fbc679d4d-92fqf" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.535313 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w\" (UID: \"10a92b09-1701-49e1-bb4c-e715ddf9ff4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.535341 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2scdn\" (UniqueName: \"kubernetes.io/projected/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-kube-api-access-2scdn\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w\" (UID: \"10a92b09-1701-49e1-bb4c-e715ddf9ff4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.535368 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbtkq\" (UniqueName: \"kubernetes.io/projected/c07e4778-10c8-4074-ac87-f6088891be7c-kube-api-access-wbtkq\") pod \"ovn-operator-controller-manager-55db956ddc-9c9tx\" (UID: \"c07e4778-10c8-4074-ac87-f6088891be7c\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9c9tx" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.535408 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhx9c\" (UniqueName: \"kubernetes.io/projected/24600886-dd49-445f-bdcd-ed919ec8fd02-kube-api-access-qhx9c\") pod \"placement-operator-controller-manager-5d646b7d76-5wxzj\" (UID: \"24600886-dd49-445f-bdcd-ed919ec8fd02\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5wxzj" Jan 22 15:40:16 crc 
kubenswrapper[4825]: I0122 15:40:16.535441 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr8lw\" (UniqueName: \"kubernetes.io/projected/022dd12c-03b8-43c4-92db-1a7654fcffeb-kube-api-access-qr8lw\") pod \"watcher-operator-controller-manager-5ffb9c6597-4ghsm\" (UID: \"022dd12c-03b8-43c4-92db-1a7654fcffeb\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-4ghsm" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.535472 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpjj4\" (UniqueName: \"kubernetes.io/projected/068b0c83-8bef-4835-871c-317c62e88f50-kube-api-access-bpjj4\") pod \"test-operator-controller-manager-69797bbcbd-86476\" (UID: \"068b0c83-8bef-4835-871c-317c62e88f50\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-86476" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.535502 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wsz8\" (UniqueName: \"kubernetes.io/projected/7042acfa-6435-4bfc-812b-45bbb2523cf9-kube-api-access-9wsz8\") pod \"swift-operator-controller-manager-547cbdb99f-bmm4t\" (UID: \"7042acfa-6435-4bfc-812b-45bbb2523cf9\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bmm4t" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.535533 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j7d9\" (UniqueName: \"kubernetes.io/projected/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-kube-api-access-4j7d9\") pod \"openstack-operator-controller-manager-848467994c-vvklq\" (UID: \"42cd1b7d-1439-4cf8-aaf4-5e665128a25e\") " pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.535560 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcrst\" (UniqueName: \"kubernetes.io/projected/83e9f24f-c02c-4bbc-92ac-7f1e5d42f00c-kube-api-access-hcrst\") pod \"rabbitmq-cluster-operator-manager-668c99d594-r7vn5\" (UID: \"83e9f24f-c02c-4bbc-92ac-7f1e5d42f00c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r7vn5" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.535593 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-webhook-certs\") pod \"openstack-operator-controller-manager-848467994c-vvklq\" (UID: \"42cd1b7d-1439-4cf8-aaf4-5e665128a25e\") " pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.535626 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-metrics-certs\") pod \"openstack-operator-controller-manager-848467994c-vvklq\" (UID: \"42cd1b7d-1439-4cf8-aaf4-5e665128a25e\") " pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" Jan 22 15:40:16 crc kubenswrapper[4825]: E0122 15:40:16.537997 4825 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 15:40:16 crc kubenswrapper[4825]: E0122 15:40:16.538042 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-cert podName:10a92b09-1701-49e1-bb4c-e715ddf9ff4f nodeName:}" failed. No retries permitted until 2026-01-22 15:40:17.038027255 +0000 UTC m=+963.799554165 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" (UID: "10a92b09-1701-49e1-bb4c-e715ddf9ff4f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.580080 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-j6kq4" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.612002 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wsz8\" (UniqueName: \"kubernetes.io/projected/7042acfa-6435-4bfc-812b-45bbb2523cf9-kube-api-access-9wsz8\") pod \"swift-operator-controller-manager-547cbdb99f-bmm4t\" (UID: \"7042acfa-6435-4bfc-812b-45bbb2523cf9\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bmm4t" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.612002 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2scdn\" (UniqueName: \"kubernetes.io/projected/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-kube-api-access-2scdn\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w\" (UID: \"10a92b09-1701-49e1-bb4c-e715ddf9ff4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.614008 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzpnp\" (UniqueName: \"kubernetes.io/projected/059620d7-dcad-4c62-804b-f92566f0fd85-kube-api-access-xzpnp\") pod \"telemetry-operator-controller-manager-5fbc679d4d-92fqf\" (UID: \"059620d7-dcad-4c62-804b-f92566f0fd85\") " pod="openstack-operators/telemetry-operator-controller-manager-5fbc679d4d-92fqf" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.632171 4825 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbtkq\" (UniqueName: \"kubernetes.io/projected/c07e4778-10c8-4074-ac87-f6088891be7c-kube-api-access-wbtkq\") pod \"ovn-operator-controller-manager-55db956ddc-9c9tx\" (UID: \"c07e4778-10c8-4074-ac87-f6088891be7c\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9c9tx" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.637328 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-metrics-certs\") pod \"openstack-operator-controller-manager-848467994c-vvklq\" (UID: \"42cd1b7d-1439-4cf8-aaf4-5e665128a25e\") " pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.637421 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr8lw\" (UniqueName: \"kubernetes.io/projected/022dd12c-03b8-43c4-92db-1a7654fcffeb-kube-api-access-qr8lw\") pod \"watcher-operator-controller-manager-5ffb9c6597-4ghsm\" (UID: \"022dd12c-03b8-43c4-92db-1a7654fcffeb\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-4ghsm" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.637445 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpjj4\" (UniqueName: \"kubernetes.io/projected/068b0c83-8bef-4835-871c-317c62e88f50-kube-api-access-bpjj4\") pod \"test-operator-controller-manager-69797bbcbd-86476\" (UID: \"068b0c83-8bef-4835-871c-317c62e88f50\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-86476" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.637479 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j7d9\" (UniqueName: 
\"kubernetes.io/projected/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-kube-api-access-4j7d9\") pod \"openstack-operator-controller-manager-848467994c-vvklq\" (UID: \"42cd1b7d-1439-4cf8-aaf4-5e665128a25e\") " pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.637519 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcrst\" (UniqueName: \"kubernetes.io/projected/83e9f24f-c02c-4bbc-92ac-7f1e5d42f00c-kube-api-access-hcrst\") pod \"rabbitmq-cluster-operator-manager-668c99d594-r7vn5\" (UID: \"83e9f24f-c02c-4bbc-92ac-7f1e5d42f00c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r7vn5" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.637542 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-webhook-certs\") pod \"openstack-operator-controller-manager-848467994c-vvklq\" (UID: \"42cd1b7d-1439-4cf8-aaf4-5e665128a25e\") " pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" Jan 22 15:40:16 crc kubenswrapper[4825]: E0122 15:40:16.637660 4825 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 15:40:16 crc kubenswrapper[4825]: E0122 15:40:16.637705 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-webhook-certs podName:42cd1b7d-1439-4cf8-aaf4-5e665128a25e nodeName:}" failed. No retries permitted until 2026-01-22 15:40:17.137688198 +0000 UTC m=+963.899215108 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-webhook-certs") pod "openstack-operator-controller-manager-848467994c-vvklq" (UID: "42cd1b7d-1439-4cf8-aaf4-5e665128a25e") : secret "webhook-server-cert" not found Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.638076 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhx9c\" (UniqueName: \"kubernetes.io/projected/24600886-dd49-445f-bdcd-ed919ec8fd02-kube-api-access-qhx9c\") pod \"placement-operator-controller-manager-5d646b7d76-5wxzj\" (UID: \"24600886-dd49-445f-bdcd-ed919ec8fd02\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5wxzj" Jan 22 15:40:16 crc kubenswrapper[4825]: E0122 15:40:16.638428 4825 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 15:40:16 crc kubenswrapper[4825]: E0122 15:40:16.638455 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-metrics-certs podName:42cd1b7d-1439-4cf8-aaf4-5e665128a25e nodeName:}" failed. No retries permitted until 2026-01-22 15:40:17.13844607 +0000 UTC m=+963.899972980 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-metrics-certs") pod "openstack-operator-controller-manager-848467994c-vvklq" (UID: "42cd1b7d-1439-4cf8-aaf4-5e665128a25e") : secret "metrics-server-cert" not found Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.753262 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpjj4\" (UniqueName: \"kubernetes.io/projected/068b0c83-8bef-4835-871c-317c62e88f50-kube-api-access-bpjj4\") pod \"test-operator-controller-manager-69797bbcbd-86476\" (UID: \"068b0c83-8bef-4835-871c-317c62e88f50\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-86476" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.760787 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j7d9\" (UniqueName: \"kubernetes.io/projected/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-kube-api-access-4j7d9\") pod \"openstack-operator-controller-manager-848467994c-vvklq\" (UID: \"42cd1b7d-1439-4cf8-aaf4-5e665128a25e\") " pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.762131 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcrst\" (UniqueName: \"kubernetes.io/projected/83e9f24f-c02c-4bbc-92ac-7f1e5d42f00c-kube-api-access-hcrst\") pod \"rabbitmq-cluster-operator-manager-668c99d594-r7vn5\" (UID: \"83e9f24f-c02c-4bbc-92ac-7f1e5d42f00c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r7vn5" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.768027 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr8lw\" (UniqueName: \"kubernetes.io/projected/022dd12c-03b8-43c4-92db-1a7654fcffeb-kube-api-access-qr8lw\") pod \"watcher-operator-controller-manager-5ffb9c6597-4ghsm\" (UID: 
\"022dd12c-03b8-43c4-92db-1a7654fcffeb\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-4ghsm" Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.945128 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d369dfbc-d830-4221-aca1-386666bca9a7-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-c6r4m\" (UID: \"d369dfbc-d830-4221-aca1-386666bca9a7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m" Jan 22 15:40:16 crc kubenswrapper[4825]: E0122 15:40:16.945328 4825 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 15:40:16 crc kubenswrapper[4825]: E0122 15:40:16.945420 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d369dfbc-d830-4221-aca1-386666bca9a7-cert podName:d369dfbc-d830-4221-aca1-386666bca9a7 nodeName:}" failed. No retries permitted until 2026-01-22 15:40:18.945394944 +0000 UTC m=+965.706921884 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d369dfbc-d830-4221-aca1-386666bca9a7-cert") pod "infra-operator-controller-manager-54ccf4f85d-c6r4m" (UID: "d369dfbc-d830-4221-aca1-386666bca9a7") : secret "infra-operator-webhook-server-cert" not found Jan 22 15:40:16 crc kubenswrapper[4825]: I0122 15:40:16.960686 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5wxzj"
Jan 22 15:40:17 crc kubenswrapper[4825]: I0122 15:40:17.147029 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-webhook-certs\") pod \"openstack-operator-controller-manager-848467994c-vvklq\" (UID: \"42cd1b7d-1439-4cf8-aaf4-5e665128a25e\") " pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq"
Jan 22 15:40:17 crc kubenswrapper[4825]: I0122 15:40:17.147097 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-metrics-certs\") pod \"openstack-operator-controller-manager-848467994c-vvklq\" (UID: \"42cd1b7d-1439-4cf8-aaf4-5e665128a25e\") " pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq"
Jan 22 15:40:17 crc kubenswrapper[4825]: I0122 15:40:17.147142 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w\" (UID: \"10a92b09-1701-49e1-bb4c-e715ddf9ff4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w"
Jan 22 15:40:17 crc kubenswrapper[4825]: E0122 15:40:17.147249 4825 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 22 15:40:17 crc kubenswrapper[4825]: E0122 15:40:17.147323 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-webhook-certs podName:42cd1b7d-1439-4cf8-aaf4-5e665128a25e nodeName:}" failed. No retries permitted until 2026-01-22 15:40:18.147304574 +0000 UTC m=+964.908831484 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-webhook-certs") pod "openstack-operator-controller-manager-848467994c-vvklq" (UID: "42cd1b7d-1439-4cf8-aaf4-5e665128a25e") : secret "webhook-server-cert" not found
Jan 22 15:40:17 crc kubenswrapper[4825]: E0122 15:40:17.147629 4825 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 22 15:40:17 crc kubenswrapper[4825]: E0122 15:40:17.147719 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-metrics-certs podName:42cd1b7d-1439-4cf8-aaf4-5e665128a25e nodeName:}" failed. No retries permitted until 2026-01-22 15:40:18.147695425 +0000 UTC m=+964.909222385 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-metrics-certs") pod "openstack-operator-controller-manager-848467994c-vvklq" (UID: "42cd1b7d-1439-4cf8-aaf4-5e665128a25e") : secret "metrics-server-cert" not found
Jan 22 15:40:17 crc kubenswrapper[4825]: E0122 15:40:17.147640 4825 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 15:40:17 crc kubenswrapper[4825]: E0122 15:40:17.147765 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-cert podName:10a92b09-1701-49e1-bb4c-e715ddf9ff4f nodeName:}" failed. No retries permitted until 2026-01-22 15:40:18.147756206 +0000 UTC m=+964.909283206 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" (UID: "10a92b09-1701-49e1-bb4c-e715ddf9ff4f") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 15:40:17 crc kubenswrapper[4825]: I0122 15:40:17.247747 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9c9tx"
Jan 22 15:40:17 crc kubenswrapper[4825]: I0122 15:40:17.275487 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wtvfh" event={"ID":"a2489602-cadb-4351-96b5-5727dbeb521d","Type":"ContainerStarted","Data":"c9ed2d74af48c1bca31d49f294e0d56bce1981bfcac9c97f71e9a27742010e70"}
Jan 22 15:40:17 crc kubenswrapper[4825]: I0122 15:40:17.312018 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bmm4t"
Jan 22 15:40:17 crc kubenswrapper[4825]: I0122 15:40:17.589083 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fbc679d4d-92fqf"
Jan 22 15:40:17 crc kubenswrapper[4825]: I0122 15:40:17.636459 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-86476"
Jan 22 15:40:17 crc kubenswrapper[4825]: I0122 15:40:17.664513 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-bj5dt"]
Jan 22 15:40:18 crc kubenswrapper[4825]: I0122 15:40:17.970602 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-4ghsm"
Jan 22 15:40:18 crc kubenswrapper[4825]: I0122 15:40:17.985531 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r7vn5"
Jan 22 15:40:18 crc kubenswrapper[4825]: I0122 15:40:18.237711 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-p8542"]
Jan 22 15:40:18 crc kubenswrapper[4825]: I0122 15:40:18.241577 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w\" (UID: \"10a92b09-1701-49e1-bb4c-e715ddf9ff4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w"
Jan 22 15:40:18 crc kubenswrapper[4825]: I0122 15:40:18.241694 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-webhook-certs\") pod \"openstack-operator-controller-manager-848467994c-vvklq\" (UID: \"42cd1b7d-1439-4cf8-aaf4-5e665128a25e\") " pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq"
Jan 22 15:40:18 crc kubenswrapper[4825]: I0122 15:40:18.241733 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-metrics-certs\") pod \"openstack-operator-controller-manager-848467994c-vvklq\" (UID: \"42cd1b7d-1439-4cf8-aaf4-5e665128a25e\") " pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq"
Jan 22 15:40:18 crc kubenswrapper[4825]: E0122 15:40:18.241844 4825 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 22 15:40:18 crc kubenswrapper[4825]: E0122 15:40:18.241887 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-metrics-certs podName:42cd1b7d-1439-4cf8-aaf4-5e665128a25e nodeName:}" failed. No retries permitted until 2026-01-22 15:40:20.241874538 +0000 UTC m=+967.003401448 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-metrics-certs") pod "openstack-operator-controller-manager-848467994c-vvklq" (UID: "42cd1b7d-1439-4cf8-aaf4-5e665128a25e") : secret "metrics-server-cert" not found
Jan 22 15:40:18 crc kubenswrapper[4825]: E0122 15:40:18.241933 4825 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 15:40:18 crc kubenswrapper[4825]: E0122 15:40:18.241951 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-cert podName:10a92b09-1701-49e1-bb4c-e715ddf9ff4f nodeName:}" failed. No retries permitted until 2026-01-22 15:40:20.24194572 +0000 UTC m=+967.003472620 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" (UID: "10a92b09-1701-49e1-bb4c-e715ddf9ff4f") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 15:40:18 crc kubenswrapper[4825]: E0122 15:40:18.242061 4825 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 22 15:40:18 crc kubenswrapper[4825]: E0122 15:40:18.242080 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-webhook-certs podName:42cd1b7d-1439-4cf8-aaf4-5e665128a25e nodeName:}" failed. No retries permitted until 2026-01-22 15:40:20.242074343 +0000 UTC m=+967.003601253 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-webhook-certs") pod "openstack-operator-controller-manager-848467994c-vvklq" (UID: "42cd1b7d-1439-4cf8-aaf4-5e665128a25e") : secret "webhook-server-cert" not found
Jan 22 15:40:18 crc kubenswrapper[4825]: I0122 15:40:18.249666 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-8jv66"]
Jan 22 15:40:18 crc kubenswrapper[4825]: I0122 15:40:18.500236 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-bj5dt" event={"ID":"d5ca055b-760a-4356-a32e-4b2358edbe73","Type":"ContainerStarted","Data":"3f1bf6291cccc74acba2ebe3548662b0fbc0c7cb47bb00142d07b7a6fa82888a"}
Jan 22 15:40:18 crc kubenswrapper[4825]: I0122 15:40:18.529091 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-swmt9"]
Jan 22 15:40:18 crc kubenswrapper[4825]: I0122 15:40:18.537147 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fntv2"]
Jan 22 15:40:18 crc kubenswrapper[4825]: I0122 15:40:18.584349 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-j9zxq"]
Jan 22 15:40:18 crc kubenswrapper[4825]: I0122 15:40:18.725430 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-4j7zw"]
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.114367 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d369dfbc-d830-4221-aca1-386666bca9a7-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-c6r4m\" (UID: \"d369dfbc-d830-4221-aca1-386666bca9a7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m"
Jan 22 15:40:19 crc kubenswrapper[4825]: E0122 15:40:19.114754 4825 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 22 15:40:19 crc kubenswrapper[4825]: E0122 15:40:19.114843 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d369dfbc-d830-4221-aca1-386666bca9a7-cert podName:d369dfbc-d830-4221-aca1-386666bca9a7 nodeName:}" failed. No retries permitted until 2026-01-22 15:40:23.114818252 +0000 UTC m=+969.876345162 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d369dfbc-d830-4221-aca1-386666bca9a7-cert") pod "infra-operator-controller-manager-54ccf4f85d-c6r4m" (UID: "d369dfbc-d830-4221-aca1-386666bca9a7") : secret "infra-operator-webhook-server-cert" not found
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.180635 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tfd8w"]
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.196333 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-5wxzj"]
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.196568 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-w2xrz"]
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.213959 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-hxvkd"]
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.224020 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-5js5c"]
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.394805 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-9c9tx"]
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.398957 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-j6kq4"]
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.423602 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-bmm4t"]
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.476706 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r7vn5"]
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.492378 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-86476"]
Jan 22 15:40:19 crc kubenswrapper[4825]: W0122 15:40:19.500417 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod068b0c83_8bef_4835_871c_317c62e88f50.slice/crio-05cc9c17ffe3488a8e7ed843ba41fe560932ebbda17768adc7ddbf5fa8206fe0 WatchSource:0}: Error finding container 05cc9c17ffe3488a8e7ed843ba41fe560932ebbda17768adc7ddbf5fa8206fe0: Status 404 returned error can't find the container with id 05cc9c17ffe3488a8e7ed843ba41fe560932ebbda17768adc7ddbf5fa8206fe0
Jan 22 15:40:19 crc kubenswrapper[4825]: E0122 15:40:19.501747 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.129.56.17:5001/openstack-k8s-operators/telemetry-operator:eb64f15362ce8fe083224b8876330e95b4455acc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xzpnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5fbc679d4d-92fqf_openstack-operators(059620d7-dcad-4c62-804b-f92566f0fd85): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 22 15:40:19 crc kubenswrapper[4825]: E0122 15:40:19.503017 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5fbc679d4d-92fqf" podUID="059620d7-dcad-4c62-804b-f92566f0fd85"
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.508835 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fbc679d4d-92fqf"]
Jan 22 15:40:19 crc kubenswrapper[4825]: W0122 15:40:19.510995 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod022dd12c_03b8_43c4_92db_1a7654fcffeb.slice/crio-84b00720a7169da3cc816fdfe81327ffd2cc241ca636f45787aa4d5cc2ca859f WatchSource:0}: Error finding container 84b00720a7169da3cc816fdfe81327ffd2cc241ca636f45787aa4d5cc2ca859f: Status 404 returned error can't find the container with id 84b00720a7169da3cc816fdfe81327ffd2cc241ca636f45787aa4d5cc2ca859f
Jan 22 15:40:19 crc kubenswrapper[4825]: E0122 15:40:19.528263 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qr8lw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5ffb9c6597-4ghsm_openstack-operators(022dd12c-03b8-43c4-92db-1a7654fcffeb): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 22 15:40:19 crc kubenswrapper[4825]: E0122 15:40:19.529578 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-4ghsm" podUID="022dd12c-03b8-43c4-92db-1a7654fcffeb"
Jan 22 15:40:19 crc kubenswrapper[4825]: E0122 15:40:19.541835 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bpjj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-86476_openstack-operators(068b0c83-8bef-4835-871c-317c62e88f50): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 22 15:40:19 crc kubenswrapper[4825]: E0122 15:40:19.543226 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-86476" podUID="068b0c83-8bef-4835-871c-317c62e88f50"
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.547488 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tfd8w" event={"ID":"dd5e1412-572a-4014-ae4b-69415ab62800","Type":"ContainerStarted","Data":"ac602561ccb45fd1c1adb2012bb42f72dab179e35c5652f07d96e77027b3dc07"}
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.547519 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p8542" event={"ID":"fb806cbf-2796-48f0-980a-5ab87a967cc7","Type":"ContainerStarted","Data":"c37bc48c9fa39d8bf1903e057677d2c8941700666faac072c371b62b2ddaf355"}
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.547529 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5wxzj" event={"ID":"24600886-dd49-445f-bdcd-ed919ec8fd02","Type":"ContainerStarted","Data":"fd79d25caef8cd4a017a14eb64fcc0100a606a86de1be8d4ad4655f666ac39c2"}
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.547541 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-4ghsm"]
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.547555 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-swmt9" event={"ID":"fc586b00-c7d2-47f7-bb91-8d9740048538","Type":"ContainerStarted","Data":"1afb6dc3d7b79a561068a0f7bb3a57c766af214e1ea42432df8b39c94e816935"}
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.552303 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hxvkd" event={"ID":"db3cdd8c-dec4-42cc-bb80-f29321423ab7","Type":"ContainerStarted","Data":"5155f042d8b2874a10c6532add3c5b5b9597437d6e9c7e37f9e7c05f54cc17a0"}
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.555170 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-w2xrz" event={"ID":"8739e7a5-b09f-4908-b1e8-893e06e8c0d5","Type":"ContainerStarted","Data":"043fb64487cf1a32a0b83e784524a439e348ace4702f33bdcfce0ed9606c57fc"}
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.556690 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fbc679d4d-92fqf" event={"ID":"059620d7-dcad-4c62-804b-f92566f0fd85","Type":"ContainerStarted","Data":"a8d87c7d67e823a1b2ae7b4a2616dadcf1656dfab18874e2f4ce04f6c5d95745"}
Jan 22 15:40:19 crc kubenswrapper[4825]: E0122 15:40:19.558744 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.17:5001/openstack-k8s-operators/telemetry-operator:eb64f15362ce8fe083224b8876330e95b4455acc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5fbc679d4d-92fqf" podUID="059620d7-dcad-4c62-804b-f92566f0fd85"
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.559657 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bmm4t" event={"ID":"7042acfa-6435-4bfc-812b-45bbb2523cf9","Type":"ContainerStarted","Data":"6e2cc03ebed7565f1e9b073cce94ada1bc02bd7837762e6885b63fb6aa08ecb6"}
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.566596 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4j7zw" event={"ID":"aa7fc7cf-0d1f-4caa-905b-add971620c70","Type":"ContainerStarted","Data":"366906721c28d6e413e2b7cbacffe96d4c6a6d0c5d543ae57a63278bc9f96190"}
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.569247 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-j9zxq" event={"ID":"e3094c32-0018-4519-8606-e0e3a3420425","Type":"ContainerStarted","Data":"71e6daf03a4fe1bc0a736d197a32b566c6e41e89cc7a1bee20120be8d92f04cc"}
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.574453 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-8jv66" event={"ID":"02608e48-40b1-4179-ac70-99aad7341dbf","Type":"ContainerStarted","Data":"652a9261bdec84c125c5ce661dec36daecb717b3eb9f44f938d6b7afe1903869"}
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.577080 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-5js5c" event={"ID":"f1b06b68-e7ff-45d0-aaf2-7ee63d7d4ec5","Type":"ContainerStarted","Data":"16a035c60f41abbaddba3555a42b201d6fd7e35d70c1e9fefe464a8341783323"}
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.579378 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9c9tx" event={"ID":"c07e4778-10c8-4074-ac87-f6088891be7c","Type":"ContainerStarted","Data":"ba23d47ba48f5d4628ac9991a8ca04f4eb17cfbaab16be5ba875500d49e61b6c"}
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.580230 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r7vn5" event={"ID":"83e9f24f-c02c-4bbc-92ac-7f1e5d42f00c","Type":"ContainerStarted","Data":"59965e706fe4c28547da3c5e07bbcd0b36a69874b01ef46e0e3a073a2e2026a8"}
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.584022 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fntv2" event={"ID":"15cb9f6b-3766-4ac5-8272-cec4434eebcd","Type":"ContainerStarted","Data":"6a16c60fbcd991ccf31cf60ebae93522f77d5f710b09f0fcda0b41d42cf8a2c6"}
Jan 22 15:40:19 crc kubenswrapper[4825]: I0122 15:40:19.585160 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-j6kq4" event={"ID":"886df838-09b9-423d-a8b6-3a5d428a0d30","Type":"ContainerStarted","Data":"ec3a620d2fd16eaa37891b97eb81abd6a5f431a7ad83c386a6c0ae08bb458dea"}
Jan 22 15:40:20 crc kubenswrapper[4825]: I0122 15:40:20.268012 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-webhook-certs\") pod \"openstack-operator-controller-manager-848467994c-vvklq\" (UID: \"42cd1b7d-1439-4cf8-aaf4-5e665128a25e\") " pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq"
Jan 22 15:40:20 crc kubenswrapper[4825]: I0122 15:40:20.268081 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-metrics-certs\") pod \"openstack-operator-controller-manager-848467994c-vvklq\" (UID: \"42cd1b7d-1439-4cf8-aaf4-5e665128a25e\") " pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq"
Jan 22 15:40:20 crc kubenswrapper[4825]: I0122 15:40:20.268131 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w\" (UID: \"10a92b09-1701-49e1-bb4c-e715ddf9ff4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w"
Jan 22 15:40:20 crc kubenswrapper[4825]: E0122 15:40:20.268176 4825 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 22 15:40:20 crc kubenswrapper[4825]: E0122 15:40:20.268257 4825 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 22 15:40:20 crc kubenswrapper[4825]: E0122 15:40:20.268275 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-webhook-certs podName:42cd1b7d-1439-4cf8-aaf4-5e665128a25e nodeName:}" failed. No retries permitted until 2026-01-22 15:40:24.268254121 +0000 UTC m=+971.029781031 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-webhook-certs") pod "openstack-operator-controller-manager-848467994c-vvklq" (UID: "42cd1b7d-1439-4cf8-aaf4-5e665128a25e") : secret "webhook-server-cert" not found
Jan 22 15:40:20 crc kubenswrapper[4825]: E0122 15:40:20.268357 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-metrics-certs podName:42cd1b7d-1439-4cf8-aaf4-5e665128a25e nodeName:}" failed. No retries permitted until 2026-01-22 15:40:24.268336393 +0000 UTC m=+971.029863303 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-metrics-certs") pod "openstack-operator-controller-manager-848467994c-vvklq" (UID: "42cd1b7d-1439-4cf8-aaf4-5e665128a25e") : secret "metrics-server-cert" not found
Jan 22 15:40:20 crc kubenswrapper[4825]: E0122 15:40:20.268363 4825 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 15:40:20 crc kubenswrapper[4825]: E0122 15:40:20.268455 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-cert podName:10a92b09-1701-49e1-bb4c-e715ddf9ff4f nodeName:}" failed. No retries permitted until 2026-01-22 15:40:24.268430366 +0000 UTC m=+971.029957336 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" (UID: "10a92b09-1701-49e1-bb4c-e715ddf9ff4f") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 15:40:20 crc kubenswrapper[4825]: I0122 15:40:20.834276 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-4ghsm" event={"ID":"022dd12c-03b8-43c4-92db-1a7654fcffeb","Type":"ContainerStarted","Data":"84b00720a7169da3cc816fdfe81327ffd2cc241ca636f45787aa4d5cc2ca859f"}
Jan 22 15:40:20 crc kubenswrapper[4825]: E0122 15:40:20.837894 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-4ghsm" podUID="022dd12c-03b8-43c4-92db-1a7654fcffeb"
Jan 22 15:40:20 crc kubenswrapper[4825]: I0122 15:40:20.853279 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-86476" event={"ID":"068b0c83-8bef-4835-871c-317c62e88f50","Type":"ContainerStarted","Data":"05cc9c17ffe3488a8e7ed843ba41fe560932ebbda17768adc7ddbf5fa8206fe0"}
Jan 22 15:40:20 crc kubenswrapper[4825]: E0122 15:40:20.855039 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.17:5001/openstack-k8s-operators/telemetry-operator:eb64f15362ce8fe083224b8876330e95b4455acc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5fbc679d4d-92fqf" podUID="059620d7-dcad-4c62-804b-f92566f0fd85"
Jan 22 15:40:20 crc kubenswrapper[4825]: E0122 15:40:20.856957 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-86476" podUID="068b0c83-8bef-4835-871c-317c62e88f50"
Jan 22 15:40:21 crc kubenswrapper[4825]: E0122 15:40:21.894471 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-86476" podUID="068b0c83-8bef-4835-871c-317c62e88f50"
Jan 22 15:40:21 crc kubenswrapper[4825]: E0122 15:40:21.894792 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-4ghsm" podUID="022dd12c-03b8-43c4-92db-1a7654fcffeb"
Jan 22 15:40:23 crc kubenswrapper[4825]: I0122 15:40:23.211873 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d369dfbc-d830-4221-aca1-386666bca9a7-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-c6r4m\" (UID: \"d369dfbc-d830-4221-aca1-386666bca9a7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m"
Jan 22 15:40:23 crc kubenswrapper[4825]: E0122 15:40:23.212444 4825 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan
22 15:40:23 crc kubenswrapper[4825]: E0122 15:40:23.212860 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d369dfbc-d830-4221-aca1-386666bca9a7-cert podName:d369dfbc-d830-4221-aca1-386666bca9a7 nodeName:}" failed. No retries permitted until 2026-01-22 15:40:31.212835512 +0000 UTC m=+977.974362412 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d369dfbc-d830-4221-aca1-386666bca9a7-cert") pod "infra-operator-controller-manager-54ccf4f85d-c6r4m" (UID: "d369dfbc-d830-4221-aca1-386666bca9a7") : secret "infra-operator-webhook-server-cert" not found Jan 22 15:40:24 crc kubenswrapper[4825]: I0122 15:40:24.294037 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-webhook-certs\") pod \"openstack-operator-controller-manager-848467994c-vvklq\" (UID: \"42cd1b7d-1439-4cf8-aaf4-5e665128a25e\") " pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" Jan 22 15:40:24 crc kubenswrapper[4825]: I0122 15:40:24.294151 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-metrics-certs\") pod \"openstack-operator-controller-manager-848467994c-vvklq\" (UID: \"42cd1b7d-1439-4cf8-aaf4-5e665128a25e\") " pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" Jan 22 15:40:24 crc kubenswrapper[4825]: I0122 15:40:24.294207 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w\" (UID: \"10a92b09-1701-49e1-bb4c-e715ddf9ff4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" Jan 22 
15:40:24 crc kubenswrapper[4825]: E0122 15:40:24.294395 4825 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 15:40:24 crc kubenswrapper[4825]: E0122 15:40:24.294493 4825 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 15:40:24 crc kubenswrapper[4825]: E0122 15:40:24.294565 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-cert podName:10a92b09-1701-49e1-bb4c-e715ddf9ff4f nodeName:}" failed. No retries permitted until 2026-01-22 15:40:32.294545507 +0000 UTC m=+979.056072417 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" (UID: "10a92b09-1701-49e1-bb4c-e715ddf9ff4f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 15:40:24 crc kubenswrapper[4825]: E0122 15:40:24.294589 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-webhook-certs podName:42cd1b7d-1439-4cf8-aaf4-5e665128a25e nodeName:}" failed. No retries permitted until 2026-01-22 15:40:32.294577898 +0000 UTC m=+979.056104808 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-webhook-certs") pod "openstack-operator-controller-manager-848467994c-vvklq" (UID: "42cd1b7d-1439-4cf8-aaf4-5e665128a25e") : secret "webhook-server-cert" not found Jan 22 15:40:24 crc kubenswrapper[4825]: E0122 15:40:24.294647 4825 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 15:40:24 crc kubenswrapper[4825]: E0122 15:40:24.294719 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-metrics-certs podName:42cd1b7d-1439-4cf8-aaf4-5e665128a25e nodeName:}" failed. No retries permitted until 2026-01-22 15:40:32.294679671 +0000 UTC m=+979.056206581 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-metrics-certs") pod "openstack-operator-controller-manager-848467994c-vvklq" (UID: "42cd1b7d-1439-4cf8-aaf4-5e665128a25e") : secret "metrics-server-cert" not found Jan 22 15:40:31 crc kubenswrapper[4825]: I0122 15:40:31.297466 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d369dfbc-d830-4221-aca1-386666bca9a7-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-c6r4m\" (UID: \"d369dfbc-d830-4221-aca1-386666bca9a7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m" Jan 22 15:40:31 crc kubenswrapper[4825]: I0122 15:40:31.303925 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d369dfbc-d830-4221-aca1-386666bca9a7-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-c6r4m\" (UID: \"d369dfbc-d830-4221-aca1-386666bca9a7\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m" Jan 22 15:40:31 crc 
kubenswrapper[4825]: I0122 15:40:31.471160 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m" Jan 22 15:40:32 crc kubenswrapper[4825]: I0122 15:40:32.322081 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-webhook-certs\") pod \"openstack-operator-controller-manager-848467994c-vvklq\" (UID: \"42cd1b7d-1439-4cf8-aaf4-5e665128a25e\") " pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" Jan 22 15:40:32 crc kubenswrapper[4825]: I0122 15:40:32.322450 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-metrics-certs\") pod \"openstack-operator-controller-manager-848467994c-vvklq\" (UID: \"42cd1b7d-1439-4cf8-aaf4-5e665128a25e\") " pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" Jan 22 15:40:32 crc kubenswrapper[4825]: I0122 15:40:32.322497 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w\" (UID: \"10a92b09-1701-49e1-bb4c-e715ddf9ff4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" Jan 22 15:40:32 crc kubenswrapper[4825]: I0122 15:40:32.327596 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-webhook-certs\") pod \"openstack-operator-controller-manager-848467994c-vvklq\" (UID: \"42cd1b7d-1439-4cf8-aaf4-5e665128a25e\") " pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" Jan 22 15:40:32 crc 
kubenswrapper[4825]: I0122 15:40:32.342313 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42cd1b7d-1439-4cf8-aaf4-5e665128a25e-metrics-certs\") pod \"openstack-operator-controller-manager-848467994c-vvklq\" (UID: \"42cd1b7d-1439-4cf8-aaf4-5e665128a25e\") " pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" Jan 22 15:40:32 crc kubenswrapper[4825]: I0122 15:40:32.366090 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10a92b09-1701-49e1-bb4c-e715ddf9ff4f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w\" (UID: \"10a92b09-1701-49e1-bb4c-e715ddf9ff4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" Jan 22 15:40:32 crc kubenswrapper[4825]: I0122 15:40:32.378007 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" Jan 22 15:40:32 crc kubenswrapper[4825]: I0122 15:40:32.509635 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" Jan 22 15:40:32 crc kubenswrapper[4825]: E0122 15:40:32.911517 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 22 15:40:32 crc kubenswrapper[4825]: E0122 15:40:32.911734 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gclfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-p8542_openstack-operators(fb806cbf-2796-48f0-980a-5ab87a967cc7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:40:32 crc kubenswrapper[4825]: E0122 15:40:32.912928 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p8542" podUID="fb806cbf-2796-48f0-980a-5ab87a967cc7" Jan 22 15:40:33 crc kubenswrapper[4825]: E0122 15:40:33.010028 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p8542" podUID="fb806cbf-2796-48f0-980a-5ab87a967cc7" Jan 22 15:40:33 crc kubenswrapper[4825]: E0122 15:40:33.664220 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71" Jan 22 15:40:33 crc kubenswrapper[4825]: E0122 15:40:33.664441 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-786sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-c87fff755-hxvkd_openstack-operators(db3cdd8c-dec4-42cc-bb80-f29321423ab7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:40:33 crc kubenswrapper[4825]: E0122 15:40:33.666285 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hxvkd" podUID="db3cdd8c-dec4-42cc-bb80-f29321423ab7" Jan 22 15:40:34 crc kubenswrapper[4825]: E0122 15:40:34.017022 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hxvkd" podUID="db3cdd8c-dec4-42cc-bb80-f29321423ab7" Jan 22 15:40:34 crc kubenswrapper[4825]: E0122 15:40:34.385998 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 22 15:40:34 crc kubenswrapper[4825]: E0122 15:40:34.386214 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s4zgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-swmt9_openstack-operators(fc586b00-c7d2-47f7-bb91-8d9740048538): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:40:34 crc kubenswrapper[4825]: E0122 15:40:34.388144 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-swmt9" podUID="fc586b00-c7d2-47f7-bb91-8d9740048538" Jan 22 15:40:35 crc kubenswrapper[4825]: E0122 15:40:35.029692 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-swmt9" podUID="fc586b00-c7d2-47f7-bb91-8d9740048538" Jan 22 15:40:35 crc kubenswrapper[4825]: E0122 15:40:35.087580 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831" Jan 22 15:40:35 crc kubenswrapper[4825]: E0122 15:40:35.087842 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jjpnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-5js5c_openstack-operators(f1b06b68-e7ff-45d0-aaf2-7ee63d7d4ec5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:40:35 crc kubenswrapper[4825]: E0122 15:40:35.089075 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-5js5c" podUID="f1b06b68-e7ff-45d0-aaf2-7ee63d7d4ec5" Jan 22 15:40:35 crc kubenswrapper[4825]: I0122 15:40:35.541398 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:40:35 crc kubenswrapper[4825]: I0122 15:40:35.541470 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:40:35 crc kubenswrapper[4825]: I0122 15:40:35.549516 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:40:35 crc kubenswrapper[4825]: I0122 15:40:35.550583 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c411fc0ec7bfe151046cb879197a0f2e7e0a4bd2d89c00b4f28d59849883ce9"} pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 15:40:35 crc kubenswrapper[4825]: I0122 15:40:35.550690 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" containerID="cri-o://7c411fc0ec7bfe151046cb879197a0f2e7e0a4bd2d89c00b4f28d59849883ce9" gracePeriod=600 Jan 22 15:40:35 crc kubenswrapper[4825]: E0122 15:40:35.700879 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 22 15:40:35 crc kubenswrapper[4825]: E0122 15:40:35.701083 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9wsz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-bmm4t_openstack-operators(7042acfa-6435-4bfc-812b-45bbb2523cf9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:40:35 crc kubenswrapper[4825]: E0122 15:40:35.703219 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bmm4t" podUID="7042acfa-6435-4bfc-812b-45bbb2523cf9" Jan 22 15:40:36 crc kubenswrapper[4825]: I0122 15:40:36.037650 4825 generic.go:334] "Generic (PLEG): container finished" podID="1d6015ae-d193-4854-9861-dc4384510fdb" containerID="7c411fc0ec7bfe151046cb879197a0f2e7e0a4bd2d89c00b4f28d59849883ce9" exitCode=0 Jan 22 15:40:36 crc kubenswrapper[4825]: I0122 15:40:36.037749 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" 
event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerDied","Data":"7c411fc0ec7bfe151046cb879197a0f2e7e0a4bd2d89c00b4f28d59849883ce9"} Jan 22 15:40:36 crc kubenswrapper[4825]: I0122 15:40:36.037822 4825 scope.go:117] "RemoveContainer" containerID="5ec0593b524672c0173949c1239ba7fcb03695ca8acb4008e01a270f260b0ff1" Jan 22 15:40:36 crc kubenswrapper[4825]: E0122 15:40:36.039565 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-5js5c" podUID="f1b06b68-e7ff-45d0-aaf2-7ee63d7d4ec5" Jan 22 15:40:36 crc kubenswrapper[4825]: E0122 15:40:36.039759 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bmm4t" podUID="7042acfa-6435-4bfc-812b-45bbb2523cf9" Jan 22 15:40:36 crc kubenswrapper[4825]: E0122 15:40:36.401477 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 22 15:40:36 crc kubenswrapper[4825]: E0122 15:40:36.401671 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wbtkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-9c9tx_openstack-operators(c07e4778-10c8-4074-ac87-f6088891be7c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:40:36 crc kubenswrapper[4825]: E0122 15:40:36.402912 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9c9tx" podUID="c07e4778-10c8-4074-ac87-f6088891be7c" Jan 22 15:40:37 crc kubenswrapper[4825]: E0122 15:40:37.047693 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9c9tx" podUID="c07e4778-10c8-4074-ac87-f6088891be7c" Jan 22 15:40:38 crc kubenswrapper[4825]: E0122 15:40:38.707932 4825 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 22 15:40:38 crc kubenswrapper[4825]: E0122 15:40:38.708095 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s6w8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-fntv2_openstack-operators(15cb9f6b-3766-4ac5-8272-cec4434eebcd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:40:38 crc kubenswrapper[4825]: E0122 15:40:38.709292 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fntv2" podUID="15cb9f6b-3766-4ac5-8272-cec4434eebcd" Jan 22 15:40:39 crc kubenswrapper[4825]: E0122 15:40:39.062643 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fntv2" podUID="15cb9f6b-3766-4ac5-8272-cec4434eebcd" Jan 22 15:40:39 crc kubenswrapper[4825]: E0122 15:40:39.361653 4825 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5" Jan 22 15:40:39 crc kubenswrapper[4825]: E0122 15:40:39.361898 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pcjhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-j6kq4_openstack-operators(886df838-09b9-423d-a8b6-3a5d428a0d30): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:40:39 crc kubenswrapper[4825]: E0122 15:40:39.363394 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-j6kq4" podUID="886df838-09b9-423d-a8b6-3a5d428a0d30" Jan 22 15:40:40 crc kubenswrapper[4825]: E0122 15:40:40.069027 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-j6kq4" podUID="886df838-09b9-423d-a8b6-3a5d428a0d30" Jan 22 15:40:40 crc kubenswrapper[4825]: E0122 15:40:40.581171 4825 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 22 15:40:40 crc kubenswrapper[4825]: E0122 15:40:40.581357 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l64kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-8jv66_openstack-operators(02608e48-40b1-4179-ac70-99aad7341dbf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:40:40 crc kubenswrapper[4825]: E0122 15:40:40.582712 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-8jv66" podUID="02608e48-40b1-4179-ac70-99aad7341dbf" Jan 22 15:40:41 crc kubenswrapper[4825]: E0122 15:40:41.075006 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-8jv66" podUID="02608e48-40b1-4179-ac70-99aad7341dbf" Jan 22 15:40:41 crc kubenswrapper[4825]: E0122 15:40:41.209731 4825 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 22 15:40:41 crc kubenswrapper[4825]: E0122 15:40:41.210263 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kx2zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-j9zxq_openstack-operators(e3094c32-0018-4519-8606-e0e3a3420425): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:40:41 crc kubenswrapper[4825]: E0122 15:40:41.212474 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-j9zxq" podUID="e3094c32-0018-4519-8606-e0e3a3420425" Jan 22 15:40:42 crc kubenswrapper[4825]: E0122 15:40:42.081151 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-j9zxq" podUID="e3094c32-0018-4519-8606-e0e3a3420425" Jan 22 15:40:52 crc kubenswrapper[4825]: E0122 15:40:52.418159 4825 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 22 15:40:52 crc kubenswrapper[4825]: E0122 15:40:52.421121 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q7jft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-4j7zw_openstack-operators(aa7fc7cf-0d1f-4caa-905b-add971620c70): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:40:52 crc kubenswrapper[4825]: E0122 15:40:52.422426 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4j7zw" podUID="aa7fc7cf-0d1f-4caa-905b-add971620c70" Jan 22 15:40:53 crc kubenswrapper[4825]: E0122 15:40:53.179774 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4j7zw" podUID="aa7fc7cf-0d1f-4caa-905b-add971620c70" Jan 22 15:40:53 crc kubenswrapper[4825]: E0122 15:40:53.205952 4825 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 22 15:40:53 crc kubenswrapper[4825]: E0122 15:40:53.206172 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bpjj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-86476_openstack-operators(068b0c83-8bef-4835-871c-317c62e88f50): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:40:53 crc kubenswrapper[4825]: E0122 15:40:53.207563 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-86476" podUID="068b0c83-8bef-4835-871c-317c62e88f50" Jan 22 15:40:53 crc kubenswrapper[4825]: E0122 15:40:53.869453 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 22 15:40:53 crc kubenswrapper[4825]: E0122 15:40:53.869676 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hcrst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-r7vn5_openstack-operators(83e9f24f-c02c-4bbc-92ac-7f1e5d42f00c): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:40:53 crc kubenswrapper[4825]: E0122 15:40:53.871762 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r7vn5" podUID="83e9f24f-c02c-4bbc-92ac-7f1e5d42f00c" Jan 22 15:40:54 crc kubenswrapper[4825]: E0122 15:40:54.187358 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r7vn5" podUID="83e9f24f-c02c-4bbc-92ac-7f1e5d42f00c" Jan 22 15:40:54 crc kubenswrapper[4825]: E0122 15:40:54.857294 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b" Jan 22 15:40:54 crc kubenswrapper[4825]: E0122 15:40:54.857898 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qr8lw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5ffb9c6597-4ghsm_openstack-operators(022dd12c-03b8-43c4-92db-1a7654fcffeb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:40:54 crc kubenswrapper[4825]: E0122 15:40:54.859307 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-4ghsm" podUID="022dd12c-03b8-43c4-92db-1a7654fcffeb" Jan 22 15:40:55 crc kubenswrapper[4825]: E0122 15:40:55.438495 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.17:5001/openstack-k8s-operators/telemetry-operator:eb64f15362ce8fe083224b8876330e95b4455acc" Jan 22 15:40:55 crc kubenswrapper[4825]: E0122 15:40:55.438556 4825 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.129.56.17:5001/openstack-k8s-operators/telemetry-operator:eb64f15362ce8fe083224b8876330e95b4455acc" Jan 22 15:40:55 crc kubenswrapper[4825]: E0122 15:40:55.438710 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.129.56.17:5001/openstack-k8s-operators/telemetry-operator:eb64f15362ce8fe083224b8876330e95b4455acc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xzpnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5fbc679d4d-92fqf_openstack-operators(059620d7-dcad-4c62-804b-f92566f0fd85): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:40:55 crc kubenswrapper[4825]: E0122 15:40:55.439812 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-5fbc679d4d-92fqf" podUID="059620d7-dcad-4c62-804b-f92566f0fd85" Jan 22 15:40:56 crc kubenswrapper[4825]: I0122 15:40:56.937062 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-848467994c-vvklq"] Jan 22 15:40:56 crc kubenswrapper[4825]: I0122 15:40:56.944799 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m"] Jan 22 15:40:57 crc kubenswrapper[4825]: W0122 15:40:57.179858 4825 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd369dfbc_d830_4221_aca1_386666bca9a7.slice/crio-8a550587f922bbd4d58e49e4aa136781348f32be3f561728658e0d84d28873c1 WatchSource:0}: Error finding container 8a550587f922bbd4d58e49e4aa136781348f32be3f561728658e0d84d28873c1: Status 404 returned error can't find the container with id 8a550587f922bbd4d58e49e4aa136781348f32be3f561728658e0d84d28873c1 Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.189864 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w"] Jan 22 15:40:57 crc kubenswrapper[4825]: W0122 15:40:57.203498 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10a92b09_1701_49e1_bb4c_e715ddf9ff4f.slice/crio-9fd71ff4dad09919cee8df46031c00a29e520d439f0c9094df04b88ff7263921 WatchSource:0}: Error finding container 9fd71ff4dad09919cee8df46031c00a29e520d439f0c9094df04b88ff7263921: Status 404 returned error can't find the container with id 9fd71ff4dad09919cee8df46031c00a29e520d439f0c9094df04b88ff7263921 Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.321785 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-w2xrz" event={"ID":"8739e7a5-b09f-4908-b1e8-893e06e8c0d5","Type":"ContainerStarted","Data":"447f4bbf956ef74b6a4499981c0812f3dd7cf5bf9a4dff2950c3409916c97b53"} Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.323668 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-w2xrz" Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.325877 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" 
event={"ID":"42cd1b7d-1439-4cf8-aaf4-5e665128a25e","Type":"ContainerStarted","Data":"d2115f62f6f4da3dbcc454a87c451a7cd1c932300af1b8354b6b305b68d7f5a1"} Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.327036 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-bj5dt" event={"ID":"d5ca055b-760a-4356-a32e-4b2358edbe73","Type":"ContainerStarted","Data":"11b6583d914c21008d4754a6f30e1d87523f51b2e18949f0a01577aba1cbd61e"} Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.327548 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-bj5dt" Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.328540 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tfd8w" event={"ID":"dd5e1412-572a-4014-ae4b-69415ab62800","Type":"ContainerStarted","Data":"749630c3719c162da6337317112a04e9be69db804195521ac1163a3d8bfd709d"} Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.328925 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tfd8w" Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.331394 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hxvkd" event={"ID":"db3cdd8c-dec4-42cc-bb80-f29321423ab7","Type":"ContainerStarted","Data":"f1e0cfc6847f4bab5c860d5b9df2bc9546a458fb4179ec752f03c864a687b532"} Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.332093 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hxvkd" Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.333279 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wtvfh" event={"ID":"a2489602-cadb-4351-96b5-5727dbeb521d","Type":"ContainerStarted","Data":"c03afde85f41b3617d6b0b49d1213e8a62551d09613949c1f15a620b999e5482"} Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.333817 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wtvfh" Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.334837 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9c9tx" event={"ID":"c07e4778-10c8-4074-ac87-f6088891be7c","Type":"ContainerStarted","Data":"e2b6a47604d25304ae47ee7d3a2b365ead6a9a4df5855eca1604ef22a2d667ca"} Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.335484 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9c9tx" Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.336402 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m" event={"ID":"d369dfbc-d830-4221-aca1-386666bca9a7","Type":"ContainerStarted","Data":"8a550587f922bbd4d58e49e4aa136781348f32be3f561728658e0d84d28873c1"} Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.351748 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerStarted","Data":"2e2cd9ccac91574642f11cb7b9691d30ced63e64cba6f480b19075fcb4ac2cb1"} Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.362304 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bmm4t" 
event={"ID":"7042acfa-6435-4bfc-812b-45bbb2523cf9","Type":"ContainerStarted","Data":"f8ceb4add47356390ec052cc36e6faab1fd98cdf8a2da31f23daea652658c266"} Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.362747 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bmm4t" Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.366334 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p8542" event={"ID":"fb806cbf-2796-48f0-980a-5ab87a967cc7","Type":"ContainerStarted","Data":"83df904bbd9d92e8a83a6fc2f57d42aadee1138f826db87aa379c3dd603da986"} Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.366904 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p8542" Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.368035 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5wxzj" event={"ID":"24600886-dd49-445f-bdcd-ed919ec8fd02","Type":"ContainerStarted","Data":"6ac0201b03ed083deedfd1434dd6fc5feeeb84e563781adabcb9e5a5546fb5af"} Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.368466 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5wxzj" Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.374420 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-w2xrz" podStartSLOduration=10.143484448 podStartE2EDuration="43.374398563s" podCreationTimestamp="2026-01-22 15:40:14 +0000 UTC" firstStartedPulling="2026-01-22 15:40:19.218365804 +0000 UTC m=+965.979892714" lastFinishedPulling="2026-01-22 15:40:52.449279899 +0000 UTC m=+999.210806829" 
observedRunningTime="2026-01-22 15:40:57.371054 +0000 UTC m=+1004.132580910" watchObservedRunningTime="2026-01-22 15:40:57.374398563 +0000 UTC m=+1004.135925473" Jan 22 15:40:57 crc kubenswrapper[4825]: I0122 15:40:57.402319 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wtvfh" podStartSLOduration=7.482589011 podStartE2EDuration="43.402296602s" podCreationTimestamp="2026-01-22 15:40:14 +0000 UTC" firstStartedPulling="2026-01-22 15:40:16.543459126 +0000 UTC m=+963.304986036" lastFinishedPulling="2026-01-22 15:40:52.463166717 +0000 UTC m=+999.224693627" observedRunningTime="2026-01-22 15:40:57.399765292 +0000 UTC m=+1004.161292212" watchObservedRunningTime="2026-01-22 15:40:57.402296602 +0000 UTC m=+1004.163823512" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.000583 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tfd8w" podStartSLOduration=10.756345796 podStartE2EDuration="44.000567044s" podCreationTimestamp="2026-01-22 15:40:14 +0000 UTC" firstStartedPulling="2026-01-22 15:40:19.21894828 +0000 UTC m=+965.980475190" lastFinishedPulling="2026-01-22 15:40:52.463169518 +0000 UTC m=+999.224696438" observedRunningTime="2026-01-22 15:40:57.475546248 +0000 UTC m=+1004.237073158" watchObservedRunningTime="2026-01-22 15:40:58.000567044 +0000 UTC m=+1004.762093964" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.007141 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-bj5dt" podStartSLOduration=8.844467371 podStartE2EDuration="44.007122527s" podCreationTimestamp="2026-01-22 15:40:14 +0000 UTC" firstStartedPulling="2026-01-22 15:40:18.035868173 +0000 UTC m=+964.797395083" lastFinishedPulling="2026-01-22 15:40:53.198523319 +0000 UTC m=+999.960050239" 
observedRunningTime="2026-01-22 15:40:57.81033149 +0000 UTC m=+1004.571858400" watchObservedRunningTime="2026-01-22 15:40:58.007122527 +0000 UTC m=+1004.768649437" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.060481 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bmm4t" podStartSLOduration=6.3346921290000004 podStartE2EDuration="43.060459827s" podCreationTimestamp="2026-01-22 15:40:15 +0000 UTC" firstStartedPulling="2026-01-22 15:40:19.437640039 +0000 UTC m=+966.199166939" lastFinishedPulling="2026-01-22 15:40:56.163407727 +0000 UTC m=+1002.924934637" observedRunningTime="2026-01-22 15:40:58.027341672 +0000 UTC m=+1004.788868572" watchObservedRunningTime="2026-01-22 15:40:58.060459827 +0000 UTC m=+1004.821986737" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.092041 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p8542" podStartSLOduration=6.407451289 podStartE2EDuration="44.092021679s" podCreationTimestamp="2026-01-22 15:40:14 +0000 UTC" firstStartedPulling="2026-01-22 15:40:18.481501751 +0000 UTC m=+965.243028661" lastFinishedPulling="2026-01-22 15:40:56.166072131 +0000 UTC m=+1002.927599051" observedRunningTime="2026-01-22 15:40:58.084834238 +0000 UTC m=+1004.846361148" watchObservedRunningTime="2026-01-22 15:40:58.092021679 +0000 UTC m=+1004.853548589" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.163547 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hxvkd" podStartSLOduration=8.046494263 podStartE2EDuration="44.163527266s" podCreationTimestamp="2026-01-22 15:40:14 +0000 UTC" firstStartedPulling="2026-01-22 15:40:19.33025909 +0000 UTC m=+966.091786000" lastFinishedPulling="2026-01-22 15:40:55.447292093 +0000 UTC m=+1002.208819003" 
observedRunningTime="2026-01-22 15:40:58.157388164 +0000 UTC m=+1004.918915074" watchObservedRunningTime="2026-01-22 15:40:58.163527266 +0000 UTC m=+1004.925054176" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.444021 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9c9tx" podStartSLOduration=6.57171743 podStartE2EDuration="43.429024232s" podCreationTimestamp="2026-01-22 15:40:15 +0000 UTC" firstStartedPulling="2026-01-22 15:40:19.402450176 +0000 UTC m=+966.163977086" lastFinishedPulling="2026-01-22 15:40:56.259756978 +0000 UTC m=+1003.021283888" observedRunningTime="2026-01-22 15:40:58.367801072 +0000 UTC m=+1005.129327982" watchObservedRunningTime="2026-01-22 15:40:58.429024232 +0000 UTC m=+1005.190551142" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.470496 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-swmt9" event={"ID":"fc586b00-c7d2-47f7-bb91-8d9740048538","Type":"ContainerStarted","Data":"3cd66c5815e18f27251a3a3bc95e89771832652b04dd4594b1c25bd40993df5e"} Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.471556 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-swmt9" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.495201 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-8jv66" event={"ID":"02608e48-40b1-4179-ac70-99aad7341dbf","Type":"ContainerStarted","Data":"2959fe9d68077354531e57db552821675c3b55a29bb3c362c0054411e3640302"} Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.495807 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-8jv66" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.497569 
4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5wxzj" podStartSLOduration=10.264706097 podStartE2EDuration="43.497545416s" podCreationTimestamp="2026-01-22 15:40:15 +0000 UTC" firstStartedPulling="2026-01-22 15:40:19.215773342 +0000 UTC m=+965.977300252" lastFinishedPulling="2026-01-22 15:40:52.448612651 +0000 UTC m=+999.210139571" observedRunningTime="2026-01-22 15:40:58.482117575 +0000 UTC m=+1005.243644485" watchObservedRunningTime="2026-01-22 15:40:58.497545416 +0000 UTC m=+1005.259072336" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.506135 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" event={"ID":"10a92b09-1701-49e1-bb4c-e715ddf9ff4f","Type":"ContainerStarted","Data":"9fd71ff4dad09919cee8df46031c00a29e520d439f0c9094df04b88ff7263921"} Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.568051 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fntv2" event={"ID":"15cb9f6b-3766-4ac5-8272-cec4434eebcd","Type":"ContainerStarted","Data":"649598e3dedd09aa715a303b2939593f4b59d742b907e833fa82a5a699aa5b6a"} Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.569281 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fntv2" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.582880 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-swmt9" podStartSLOduration=6.578104514 podStartE2EDuration="44.582696284s" podCreationTimestamp="2026-01-22 15:40:14 +0000 UTC" firstStartedPulling="2026-01-22 15:40:18.549953113 +0000 UTC m=+965.311480023" lastFinishedPulling="2026-01-22 15:40:56.554544883 +0000 UTC 
m=+1003.316071793" observedRunningTime="2026-01-22 15:40:58.567304054 +0000 UTC m=+1005.328830964" watchObservedRunningTime="2026-01-22 15:40:58.582696284 +0000 UTC m=+1005.344223194" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.604315 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" event={"ID":"42cd1b7d-1439-4cf8-aaf4-5e665128a25e","Type":"ContainerStarted","Data":"7685c6a2997b38c09460ec8a81c19880eeb1d93f24d3332e9f01745c5e2b0adc"} Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.604825 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.643810 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-8jv66" podStartSLOduration=6.646123066 podStartE2EDuration="44.643781391s" podCreationTimestamp="2026-01-22 15:40:14 +0000 UTC" firstStartedPulling="2026-01-22 15:40:18.48180511 +0000 UTC m=+965.243332020" lastFinishedPulling="2026-01-22 15:40:56.479463395 +0000 UTC m=+1003.240990345" observedRunningTime="2026-01-22 15:40:58.632002012 +0000 UTC m=+1005.393528922" watchObservedRunningTime="2026-01-22 15:40:58.643781391 +0000 UTC m=+1005.405308301" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.649283 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-5js5c" event={"ID":"f1b06b68-e7ff-45d0-aaf2-7ee63d7d4ec5","Type":"ContainerStarted","Data":"39fe22a00f41002ecf12e40aeff218513a9c45b82bd330862c410c5db3d2bf93"} Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.660018 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-5js5c" Jan 22 15:40:58 crc 
kubenswrapper[4825]: I0122 15:40:58.675451 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fntv2" podStartSLOduration=6.998232161 podStartE2EDuration="44.675433205s" podCreationTimestamp="2026-01-22 15:40:14 +0000 UTC" firstStartedPulling="2026-01-22 15:40:18.544919003 +0000 UTC m=+965.306445913" lastFinishedPulling="2026-01-22 15:40:56.222120047 +0000 UTC m=+1002.983646957" observedRunningTime="2026-01-22 15:40:58.668606444 +0000 UTC m=+1005.430133344" watchObservedRunningTime="2026-01-22 15:40:58.675433205 +0000 UTC m=+1005.436960125" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.680063 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-j9zxq" event={"ID":"e3094c32-0018-4519-8606-e0e3a3420425","Type":"ContainerStarted","Data":"f56c9b6f62853e7bb329c6f77bab956acfb9fb503625b7f413587268c6962f7f"} Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.680313 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-j9zxq" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.684398 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-j6kq4" event={"ID":"886df838-09b9-423d-a8b6-3a5d428a0d30","Type":"ContainerStarted","Data":"dee11897f4779dde32746518552ebb2d58dee05b16088e3f492c758e3f720961"} Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.684816 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-j6kq4" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.712343 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" 
podStartSLOduration=43.712329326 podStartE2EDuration="43.712329326s" podCreationTimestamp="2026-01-22 15:40:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:40:58.707228273 +0000 UTC m=+1005.468755183" watchObservedRunningTime="2026-01-22 15:40:58.712329326 +0000 UTC m=+1005.473856226" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.783557 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-j6kq4" podStartSLOduration=7.834180901 podStartE2EDuration="44.783539555s" podCreationTimestamp="2026-01-22 15:40:14 +0000 UTC" firstStartedPulling="2026-01-22 15:40:19.406969362 +0000 UTC m=+966.168496272" lastFinishedPulling="2026-01-22 15:40:56.356328016 +0000 UTC m=+1003.117854926" observedRunningTime="2026-01-22 15:40:58.769552584 +0000 UTC m=+1005.531079494" watchObservedRunningTime="2026-01-22 15:40:58.783539555 +0000 UTC m=+1005.545066465" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.807076 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-5js5c" podStartSLOduration=7.540474216 podStartE2EDuration="44.807055831s" podCreationTimestamp="2026-01-22 15:40:14 +0000 UTC" firstStartedPulling="2026-01-22 15:40:19.214476175 +0000 UTC m=+965.976003085" lastFinishedPulling="2026-01-22 15:40:56.48105776 +0000 UTC m=+1003.242584700" observedRunningTime="2026-01-22 15:40:58.803634386 +0000 UTC m=+1005.565161286" watchObservedRunningTime="2026-01-22 15:40:58.807055831 +0000 UTC m=+1005.568582741" Jan 22 15:40:58 crc kubenswrapper[4825]: I0122 15:40:58.865743 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-j9zxq" podStartSLOduration=7.132657526 podStartE2EDuration="44.86572328s" 
podCreationTimestamp="2026-01-22 15:40:14 +0000 UTC" firstStartedPulling="2026-01-22 15:40:18.750369472 +0000 UTC m=+965.511896382" lastFinishedPulling="2026-01-22 15:40:56.483435196 +0000 UTC m=+1003.244962136" observedRunningTime="2026-01-22 15:40:58.843437088 +0000 UTC m=+1005.604963998" watchObservedRunningTime="2026-01-22 15:40:58.86572328 +0000 UTC m=+1005.627250190" Jan 22 15:41:04 crc kubenswrapper[4825]: E0122 15:41:04.519576 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-86476" podUID="068b0c83-8bef-4835-871c-317c62e88f50" Jan 22 15:41:04 crc kubenswrapper[4825]: I0122 15:41:04.996469 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-wtvfh" Jan 22 15:41:05 crc kubenswrapper[4825]: I0122 15:41:05.034506 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-bj5dt" Jan 22 15:41:05 crc kubenswrapper[4825]: I0122 15:41:05.051307 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-p8542" Jan 22 15:41:05 crc kubenswrapper[4825]: I0122 15:41:05.284348 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-j9zxq" Jan 22 15:41:05 crc kubenswrapper[4825]: I0122 15:41:05.284439 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-swmt9" Jan 22 15:41:05 crc kubenswrapper[4825]: I0122 15:41:05.284589 4825 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fntv2" Jan 22 15:41:05 crc kubenswrapper[4825]: I0122 15:41:05.363807 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-w2xrz" Jan 22 15:41:05 crc kubenswrapper[4825]: I0122 15:41:05.495314 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-8jv66" Jan 22 15:41:06 crc kubenswrapper[4825]: I0122 15:41:06.114204 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-5js5c" Jan 22 15:41:06 crc kubenswrapper[4825]: I0122 15:41:06.114280 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tfd8w" Jan 22 15:41:06 crc kubenswrapper[4825]: I0122 15:41:06.150432 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hxvkd" Jan 22 15:41:06 crc kubenswrapper[4825]: E0122 15:41:06.576571 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.17:5001/openstack-k8s-operators/telemetry-operator:eb64f15362ce8fe083224b8876330e95b4455acc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5fbc679d4d-92fqf" podUID="059620d7-dcad-4c62-804b-f92566f0fd85" Jan 22 15:41:06 crc kubenswrapper[4825]: I0122 15:41:06.584391 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-j6kq4" Jan 22 15:41:06 crc kubenswrapper[4825]: I0122 15:41:06.964788 4825 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5wxzj" Jan 22 15:41:07 crc kubenswrapper[4825]: I0122 15:41:07.252379 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-9c9tx" Jan 22 15:41:07 crc kubenswrapper[4825]: I0122 15:41:07.315583 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bmm4t" Jan 22 15:41:07 crc kubenswrapper[4825]: E0122 15:41:07.518593 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-4ghsm" podUID="022dd12c-03b8-43c4-92db-1a7654fcffeb" Jan 22 15:41:07 crc kubenswrapper[4825]: I0122 15:41:07.956630 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m" event={"ID":"d369dfbc-d830-4221-aca1-386666bca9a7","Type":"ContainerStarted","Data":"a1666d85d142230bf1e3081f096008d5fce2d4df6045b0bf7c430351f26d3378"} Jan 22 15:41:07 crc kubenswrapper[4825]: I0122 15:41:07.956967 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m" Jan 22 15:41:07 crc kubenswrapper[4825]: I0122 15:41:07.958894 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" event={"ID":"10a92b09-1701-49e1-bb4c-e715ddf9ff4f","Type":"ContainerStarted","Data":"72d6a30746ac2b5d40d2d089ca38c5a03185d2fcfdece6519bf86f54f5a17415"} Jan 22 15:41:07 crc kubenswrapper[4825]: I0122 15:41:07.958946 4825 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" Jan 22 15:41:07 crc kubenswrapper[4825]: I0122 15:41:07.960466 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r7vn5" event={"ID":"83e9f24f-c02c-4bbc-92ac-7f1e5d42f00c","Type":"ContainerStarted","Data":"08b5ae51ec20984c1f052d8ba8cb56e732f3bcb7c12c8dff649b52a6c407988c"} Jan 22 15:41:07 crc kubenswrapper[4825]: I0122 15:41:07.961955 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4j7zw" event={"ID":"aa7fc7cf-0d1f-4caa-905b-add971620c70","Type":"ContainerStarted","Data":"da16d18ffebd1d383c76038f57fb9f829ccb6d84a35f4fd5d82f107b6e46e092"} Jan 22 15:41:07 crc kubenswrapper[4825]: I0122 15:41:07.962158 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4j7zw" Jan 22 15:41:07 crc kubenswrapper[4825]: I0122 15:41:07.995014 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r7vn5" podStartSLOduration=5.243329773 podStartE2EDuration="52.995000217s" podCreationTimestamp="2026-01-22 15:40:15 +0000 UTC" firstStartedPulling="2026-01-22 15:40:19.495736652 +0000 UTC m=+966.257263562" lastFinishedPulling="2026-01-22 15:41:07.247407096 +0000 UTC m=+1014.008934006" observedRunningTime="2026-01-22 15:41:07.993082864 +0000 UTC m=+1014.754609774" watchObservedRunningTime="2026-01-22 15:41:07.995000217 +0000 UTC m=+1014.756527127" Jan 22 15:41:07 crc kubenswrapper[4825]: I0122 15:41:07.996953 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m" podStartSLOduration=43.918060158 
podStartE2EDuration="53.996947432s" podCreationTimestamp="2026-01-22 15:40:14 +0000 UTC" firstStartedPulling="2026-01-22 15:40:57.187646236 +0000 UTC m=+1003.949173146" lastFinishedPulling="2026-01-22 15:41:07.26653351 +0000 UTC m=+1014.028060420" observedRunningTime="2026-01-22 15:41:07.975113142 +0000 UTC m=+1014.736640052" watchObservedRunningTime="2026-01-22 15:41:07.996947432 +0000 UTC m=+1014.758474342" Jan 22 15:41:08 crc kubenswrapper[4825]: I0122 15:41:08.007383 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4j7zw" podStartSLOduration=5.509717852 podStartE2EDuration="54.007366343s" podCreationTimestamp="2026-01-22 15:40:14 +0000 UTC" firstStartedPulling="2026-01-22 15:40:18.749707503 +0000 UTC m=+965.511234413" lastFinishedPulling="2026-01-22 15:41:07.247355994 +0000 UTC m=+1014.008882904" observedRunningTime="2026-01-22 15:41:08.004564804 +0000 UTC m=+1014.766091714" watchObservedRunningTime="2026-01-22 15:41:08.007366343 +0000 UTC m=+1014.768893253" Jan 22 15:41:08 crc kubenswrapper[4825]: I0122 15:41:08.038697 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" podStartSLOduration=43.997347582 podStartE2EDuration="54.038672957s" podCreationTimestamp="2026-01-22 15:40:14 +0000 UTC" firstStartedPulling="2026-01-22 15:40:57.20569361 +0000 UTC m=+1003.967220520" lastFinishedPulling="2026-01-22 15:41:07.247018975 +0000 UTC m=+1014.008545895" observedRunningTime="2026-01-22 15:41:08.034792459 +0000 UTC m=+1014.796319379" watchObservedRunningTime="2026-01-22 15:41:08.038672957 +0000 UTC m=+1014.800199877" Jan 22 15:41:12 crc kubenswrapper[4825]: I0122 15:41:12.387758 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-848467994c-vvklq" Jan 22 15:41:12 crc 
kubenswrapper[4825]: I0122 15:41:12.522131 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w" Jan 22 15:41:15 crc kubenswrapper[4825]: I0122 15:41:15.980775 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4j7zw" Jan 22 15:41:20 crc kubenswrapper[4825]: I0122 15:41:20.076684 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fbc679d4d-92fqf" event={"ID":"059620d7-dcad-4c62-804b-f92566f0fd85","Type":"ContainerStarted","Data":"e1e373220c2391739172b939c64c2634a6c72767eb9bef4adcbd1a2eb353ca21"} Jan 22 15:41:20 crc kubenswrapper[4825]: I0122 15:41:20.077459 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5fbc679d4d-92fqf" Jan 22 15:41:20 crc kubenswrapper[4825]: I0122 15:41:20.078398 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-86476" event={"ID":"068b0c83-8bef-4835-871c-317c62e88f50","Type":"ContainerStarted","Data":"d8daf0bf6f5cc604c9daa42f6ef66800401c992ecf7a9a98630085d71dd69dc6"} Jan 22 15:41:20 crc kubenswrapper[4825]: I0122 15:41:20.078591 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-86476" Jan 22 15:41:20 crc kubenswrapper[4825]: I0122 15:41:20.102486 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5fbc679d4d-92fqf" podStartSLOduration=5.025575993 podStartE2EDuration="1m5.102468644s" podCreationTimestamp="2026-01-22 15:40:15 +0000 UTC" firstStartedPulling="2026-01-22 15:40:19.501594116 +0000 UTC m=+966.263121036" lastFinishedPulling="2026-01-22 
15:41:19.578486777 +0000 UTC m=+1026.340013687" observedRunningTime="2026-01-22 15:41:20.102002491 +0000 UTC m=+1026.863529401" watchObservedRunningTime="2026-01-22 15:41:20.102468644 +0000 UTC m=+1026.863995554" Jan 22 15:41:20 crc kubenswrapper[4825]: I0122 15:41:20.122706 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-86476" podStartSLOduration=5.680310071 podStartE2EDuration="1m5.122684789s" podCreationTimestamp="2026-01-22 15:40:15 +0000 UTC" firstStartedPulling="2026-01-22 15:40:19.541665495 +0000 UTC m=+966.303192405" lastFinishedPulling="2026-01-22 15:41:18.984040213 +0000 UTC m=+1025.745567123" observedRunningTime="2026-01-22 15:41:20.115482367 +0000 UTC m=+1026.877009297" watchObservedRunningTime="2026-01-22 15:41:20.122684789 +0000 UTC m=+1026.884211709" Jan 22 15:41:21 crc kubenswrapper[4825]: I0122 15:41:21.479032 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-c6r4m" Jan 22 15:41:23 crc kubenswrapper[4825]: I0122 15:41:23.103578 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-4ghsm" event={"ID":"022dd12c-03b8-43c4-92db-1a7654fcffeb","Type":"ContainerStarted","Data":"70c04af125ef53a816a7b77ba06d2df7980437e0467207de211c1a21d6ffff10"} Jan 22 15:41:23 crc kubenswrapper[4825]: I0122 15:41:23.105067 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-4ghsm" Jan 22 15:41:23 crc kubenswrapper[4825]: I0122 15:41:23.127490 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-4ghsm" podStartSLOduration=5.685409423 podStartE2EDuration="1m8.127472671s" podCreationTimestamp="2026-01-22 15:40:15 +0000 UTC" 
firstStartedPulling="2026-01-22 15:40:19.528150407 +0000 UTC m=+966.289677317" lastFinishedPulling="2026-01-22 15:41:21.970213645 +0000 UTC m=+1028.731740565" observedRunningTime="2026-01-22 15:41:23.118966183 +0000 UTC m=+1029.880493093" watchObservedRunningTime="2026-01-22 15:41:23.127472671 +0000 UTC m=+1029.888999581" Jan 22 15:41:27 crc kubenswrapper[4825]: I0122 15:41:27.594136 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5fbc679d4d-92fqf" Jan 22 15:41:27 crc kubenswrapper[4825]: I0122 15:41:27.642729 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-86476" Jan 22 15:41:27 crc kubenswrapper[4825]: I0122 15:41:27.972738 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-4ghsm" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.259831 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dv8gm"] Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.264517 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dv8gm" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.269204 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.269396 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.269516 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.269659 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-gqr6w" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.281118 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dv8gm"] Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.329807 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hwvwq"] Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.331020 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hwvwq" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.333887 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.354478 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hwvwq"] Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.406194 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f5c5372-5119-4cb1-a849-ff822718bd40-config\") pod \"dnsmasq-dns-675f4bcbfc-dv8gm\" (UID: \"4f5c5372-5119-4cb1-a849-ff822718bd40\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dv8gm" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.406315 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp24m\" (UniqueName: \"kubernetes.io/projected/4f5c5372-5119-4cb1-a849-ff822718bd40-kube-api-access-tp24m\") pod \"dnsmasq-dns-675f4bcbfc-dv8gm\" (UID: \"4f5c5372-5119-4cb1-a849-ff822718bd40\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dv8gm" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.507685 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp24m\" (UniqueName: \"kubernetes.io/projected/4f5c5372-5119-4cb1-a849-ff822718bd40-kube-api-access-tp24m\") pod \"dnsmasq-dns-675f4bcbfc-dv8gm\" (UID: \"4f5c5372-5119-4cb1-a849-ff822718bd40\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dv8gm" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.507737 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e25af55c-8182-4883-b6f3-4b9a937d598c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-hwvwq\" (UID: \"e25af55c-8182-4883-b6f3-4b9a937d598c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hwvwq" Jan 
22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.507772 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f5c5372-5119-4cb1-a849-ff822718bd40-config\") pod \"dnsmasq-dns-675f4bcbfc-dv8gm\" (UID: \"4f5c5372-5119-4cb1-a849-ff822718bd40\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dv8gm" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.507790 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w4kb\" (UniqueName: \"kubernetes.io/projected/e25af55c-8182-4883-b6f3-4b9a937d598c-kube-api-access-2w4kb\") pod \"dnsmasq-dns-78dd6ddcc-hwvwq\" (UID: \"e25af55c-8182-4883-b6f3-4b9a937d598c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hwvwq" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.507819 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e25af55c-8182-4883-b6f3-4b9a937d598c-config\") pod \"dnsmasq-dns-78dd6ddcc-hwvwq\" (UID: \"e25af55c-8182-4883-b6f3-4b9a937d598c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hwvwq" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.508779 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f5c5372-5119-4cb1-a849-ff822718bd40-config\") pod \"dnsmasq-dns-675f4bcbfc-dv8gm\" (UID: \"4f5c5372-5119-4cb1-a849-ff822718bd40\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dv8gm" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.527030 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp24m\" (UniqueName: \"kubernetes.io/projected/4f5c5372-5119-4cb1-a849-ff822718bd40-kube-api-access-tp24m\") pod \"dnsmasq-dns-675f4bcbfc-dv8gm\" (UID: \"4f5c5372-5119-4cb1-a849-ff822718bd40\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dv8gm" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 
15:41:45.581503 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dv8gm" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.609394 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e25af55c-8182-4883-b6f3-4b9a937d598c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-hwvwq\" (UID: \"e25af55c-8182-4883-b6f3-4b9a937d598c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hwvwq" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.609480 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w4kb\" (UniqueName: \"kubernetes.io/projected/e25af55c-8182-4883-b6f3-4b9a937d598c-kube-api-access-2w4kb\") pod \"dnsmasq-dns-78dd6ddcc-hwvwq\" (UID: \"e25af55c-8182-4883-b6f3-4b9a937d598c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hwvwq" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.610038 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e25af55c-8182-4883-b6f3-4b9a937d598c-config\") pod \"dnsmasq-dns-78dd6ddcc-hwvwq\" (UID: \"e25af55c-8182-4883-b6f3-4b9a937d598c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hwvwq" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.610809 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e25af55c-8182-4883-b6f3-4b9a937d598c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-hwvwq\" (UID: \"e25af55c-8182-4883-b6f3-4b9a937d598c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hwvwq" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.612824 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e25af55c-8182-4883-b6f3-4b9a937d598c-config\") pod \"dnsmasq-dns-78dd6ddcc-hwvwq\" (UID: \"e25af55c-8182-4883-b6f3-4b9a937d598c\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-hwvwq" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.636163 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w4kb\" (UniqueName: \"kubernetes.io/projected/e25af55c-8182-4883-b6f3-4b9a937d598c-kube-api-access-2w4kb\") pod \"dnsmasq-dns-78dd6ddcc-hwvwq\" (UID: \"e25af55c-8182-4883-b6f3-4b9a937d598c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hwvwq" Jan 22 15:41:45 crc kubenswrapper[4825]: I0122 15:41:45.649033 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hwvwq" Jan 22 15:41:46 crc kubenswrapper[4825]: I0122 15:41:46.130932 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dv8gm"] Jan 22 15:41:46 crc kubenswrapper[4825]: I0122 15:41:46.187861 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hwvwq"] Jan 22 15:41:46 crc kubenswrapper[4825]: W0122 15:41:46.193139 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode25af55c_8182_4883_b6f3_4b9a937d598c.slice/crio-3526a1fff1c43471899ce6e77ad0aabfc21af3a5004cd584a811ccc809ff7710 WatchSource:0}: Error finding container 3526a1fff1c43471899ce6e77ad0aabfc21af3a5004cd584a811ccc809ff7710: Status 404 returned error can't find the container with id 3526a1fff1c43471899ce6e77ad0aabfc21af3a5004cd584a811ccc809ff7710 Jan 22 15:41:46 crc kubenswrapper[4825]: I0122 15:41:46.484591 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-hwvwq" event={"ID":"e25af55c-8182-4883-b6f3-4b9a937d598c","Type":"ContainerStarted","Data":"3526a1fff1c43471899ce6e77ad0aabfc21af3a5004cd584a811ccc809ff7710"} Jan 22 15:41:46 crc kubenswrapper[4825]: I0122 15:41:46.486119 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-dv8gm" 
event={"ID":"4f5c5372-5119-4cb1-a849-ff822718bd40","Type":"ContainerStarted","Data":"800481b4f83735bb5b0493085e119140ef5fa54a40d0ad0cc5e8dd7cdc872fd1"} Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.020278 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dv8gm"] Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.159862 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5p8q6"] Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.161444 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.190478 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5p8q6"] Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.329066 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3e73acb-b6ba-4385-aa50-f303b4aab4f2-config\") pod \"dnsmasq-dns-666b6646f7-5p8q6\" (UID: \"c3e73acb-b6ba-4385-aa50-f303b4aab4f2\") " pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.329340 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3e73acb-b6ba-4385-aa50-f303b4aab4f2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5p8q6\" (UID: \"c3e73acb-b6ba-4385-aa50-f303b4aab4f2\") " pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.329386 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvd7p\" (UniqueName: \"kubernetes.io/projected/c3e73acb-b6ba-4385-aa50-f303b4aab4f2-kube-api-access-nvd7p\") pod \"dnsmasq-dns-666b6646f7-5p8q6\" (UID: \"c3e73acb-b6ba-4385-aa50-f303b4aab4f2\") " 
pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.465373 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3e73acb-b6ba-4385-aa50-f303b4aab4f2-config\") pod \"dnsmasq-dns-666b6646f7-5p8q6\" (UID: \"c3e73acb-b6ba-4385-aa50-f303b4aab4f2\") " pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.465451 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3e73acb-b6ba-4385-aa50-f303b4aab4f2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5p8q6\" (UID: \"c3e73acb-b6ba-4385-aa50-f303b4aab4f2\") " pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.465494 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvd7p\" (UniqueName: \"kubernetes.io/projected/c3e73acb-b6ba-4385-aa50-f303b4aab4f2-kube-api-access-nvd7p\") pod \"dnsmasq-dns-666b6646f7-5p8q6\" (UID: \"c3e73acb-b6ba-4385-aa50-f303b4aab4f2\") " pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.466574 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3e73acb-b6ba-4385-aa50-f303b4aab4f2-config\") pod \"dnsmasq-dns-666b6646f7-5p8q6\" (UID: \"c3e73acb-b6ba-4385-aa50-f303b4aab4f2\") " pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.466686 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3e73acb-b6ba-4385-aa50-f303b4aab4f2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5p8q6\" (UID: \"c3e73acb-b6ba-4385-aa50-f303b4aab4f2\") " pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.520033 4825 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvd7p\" (UniqueName: \"kubernetes.io/projected/c3e73acb-b6ba-4385-aa50-f303b4aab4f2-kube-api-access-nvd7p\") pod \"dnsmasq-dns-666b6646f7-5p8q6\" (UID: \"c3e73acb-b6ba-4385-aa50-f303b4aab4f2\") " pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.535289 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hwvwq"] Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.540658 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.561870 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dfcq6"] Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.566210 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.575306 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dfcq6"] Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.669877 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc5nz\" (UniqueName: \"kubernetes.io/projected/8ed16250-e013-4590-a99d-55576235c7d9-kube-api-access-pc5nz\") pod \"dnsmasq-dns-57d769cc4f-dfcq6\" (UID: \"8ed16250-e013-4590-a99d-55576235c7d9\") " pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.670235 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ed16250-e013-4590-a99d-55576235c7d9-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-dfcq6\" (UID: \"8ed16250-e013-4590-a99d-55576235c7d9\") " pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" 
Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.670351 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed16250-e013-4590-a99d-55576235c7d9-config\") pod \"dnsmasq-dns-57d769cc4f-dfcq6\" (UID: \"8ed16250-e013-4590-a99d-55576235c7d9\") " pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.772197 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc5nz\" (UniqueName: \"kubernetes.io/projected/8ed16250-e013-4590-a99d-55576235c7d9-kube-api-access-pc5nz\") pod \"dnsmasq-dns-57d769cc4f-dfcq6\" (UID: \"8ed16250-e013-4590-a99d-55576235c7d9\") " pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.772302 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ed16250-e013-4590-a99d-55576235c7d9-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-dfcq6\" (UID: \"8ed16250-e013-4590-a99d-55576235c7d9\") " pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.772322 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed16250-e013-4590-a99d-55576235c7d9-config\") pod \"dnsmasq-dns-57d769cc4f-dfcq6\" (UID: \"8ed16250-e013-4590-a99d-55576235c7d9\") " pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.774285 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed16250-e013-4590-a99d-55576235c7d9-config\") pod \"dnsmasq-dns-57d769cc4f-dfcq6\" (UID: \"8ed16250-e013-4590-a99d-55576235c7d9\") " pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.775965 4825 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ed16250-e013-4590-a99d-55576235c7d9-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-dfcq6\" (UID: \"8ed16250-e013-4590-a99d-55576235c7d9\") " pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.803127 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc5nz\" (UniqueName: \"kubernetes.io/projected/8ed16250-e013-4590-a99d-55576235c7d9-kube-api-access-pc5nz\") pod \"dnsmasq-dns-57d769cc4f-dfcq6\" (UID: \"8ed16250-e013-4590-a99d-55576235c7d9\") " pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" Jan 22 15:41:48 crc kubenswrapper[4825]: I0122 15:41:48.923629 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.320000 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.326146 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.329364 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.329545 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qcdwt" Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.330815 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.331189 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.331386 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.331556 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.334202 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.354278 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.391254 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5p8q6"] Jan 22 15:41:49 crc kubenswrapper[4825]: W0122 15:41:49.409635 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3e73acb_b6ba_4385_aa50_f303b4aab4f2.slice/crio-0bf0bcbc47ac12c56e3919a35214922341e96e8c57db6f110ad1b4d3ff66a135 WatchSource:0}: Error finding container 0bf0bcbc47ac12c56e3919a35214922341e96e8c57db6f110ad1b4d3ff66a135: Status 404 returned error 
can't find the container with id 0bf0bcbc47ac12c56e3919a35214922341e96e8c57db6f110ad1b4d3ff66a135 Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.501291 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0" Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.501380 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0" Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.501445 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45e6f05d-8a80-49ca-add6-e8c41572b664-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0" Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.501513 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45e6f05d-8a80-49ca-add6-e8c41572b664-config-data\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0" Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.501627 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45e6f05d-8a80-49ca-add6-e8c41572b664-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.501687 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.502440 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45e6f05d-8a80-49ca-add6-e8c41572b664-pod-info\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.502499 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.502583 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45e6f05d-8a80-49ca-add6-e8c41572b664-server-conf\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.502676 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bps94\" (UniqueName: \"kubernetes.io/projected/45e6f05d-8a80-49ca-add6-e8c41572b664-kube-api-access-bps94\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.502772 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.563370 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" event={"ID":"c3e73acb-b6ba-4385-aa50-f303b4aab4f2","Type":"ContainerStarted","Data":"0bf0bcbc47ac12c56e3919a35214922341e96e8c57db6f110ad1b4d3ff66a135"}
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.603814 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.603867 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45e6f05d-8a80-49ca-add6-e8c41572b664-pod-info\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.603891 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.603972 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45e6f05d-8a80-49ca-add6-e8c41572b664-server-conf\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.604053 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bps94\" (UniqueName: \"kubernetes.io/projected/45e6f05d-8a80-49ca-add6-e8c41572b664-kube-api-access-bps94\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.604084 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.604127 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.604197 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.604231 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45e6f05d-8a80-49ca-add6-e8c41572b664-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.604272 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45e6f05d-8a80-49ca-add6-e8c41572b664-config-data\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.604296 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45e6f05d-8a80-49ca-add6-e8c41572b664-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.605223 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.605890 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45e6f05d-8a80-49ca-add6-e8c41572b664-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.618366 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.620616 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45e6f05d-8a80-49ca-add6-e8c41572b664-pod-info\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.621083 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45e6f05d-8a80-49ca-add6-e8c41572b664-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.626092 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.626652 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45e6f05d-8a80-49ca-add6-e8c41572b664-config-data\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.627815 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.631064 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.631115 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9e00e5ecb6ed7059a52e386faabffab5919b9b0484b9559a256c53a584854955/globalmount\"" pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.633780 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45e6f05d-8a80-49ca-add6-e8c41572b664-server-conf\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.634767 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bps94\" (UniqueName: \"kubernetes.io/projected/45e6f05d-8a80-49ca-add6-e8c41572b664-kube-api-access-bps94\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.745117 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.746394 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.750636 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-njvz2"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.751339 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.751387 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.751564 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.754541 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.754830 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.754997 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.800132 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.821176 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\") pod \"rabbitmq-server-0\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.868614 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dfcq6"]
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.933456 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.933504 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/215992ea-1abc-44d0-925b-799eb87bcc09-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.933538 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.933555 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmkrj\" (UniqueName: \"kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-kube-api-access-tmkrj\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.933588 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.933668 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.933693 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.933714 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/215992ea-1abc-44d0-925b-799eb87bcc09-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.933734 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/215992ea-1abc-44d0-925b-799eb87bcc09-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.933764 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/215992ea-1abc-44d0-925b-799eb87bcc09-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:49 crc kubenswrapper[4825]: I0122 15:41:49.933786 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/215992ea-1abc-44d0-925b-799eb87bcc09-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.002334 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.073792 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.073854 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.073876 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/215992ea-1abc-44d0-925b-799eb87bcc09-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.073893 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/215992ea-1abc-44d0-925b-799eb87bcc09-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.073924 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/215992ea-1abc-44d0-925b-799eb87bcc09-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.073946 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/215992ea-1abc-44d0-925b-799eb87bcc09-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.073998 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.074015 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/215992ea-1abc-44d0-925b-799eb87bcc09-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.074031 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.074046 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmkrj\" (UniqueName: \"kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-kube-api-access-tmkrj\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.074076 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.074587 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.076538 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/215992ea-1abc-44d0-925b-799eb87bcc09-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.076690 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/215992ea-1abc-44d0-925b-799eb87bcc09-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.104058 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.105888 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.105925 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e5cc72a44005dd2703cc9543bdd999652809794a76249037f23b6c30ba242223/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.111632 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/215992ea-1abc-44d0-925b-799eb87bcc09-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.111727 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.114743 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/215992ea-1abc-44d0-925b-799eb87bcc09-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.114785 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.115014 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmkrj\" (UniqueName: \"kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-kube-api-access-tmkrj\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.115393 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/215992ea-1abc-44d0-925b-799eb87bcc09-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.243320 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\") pod \"rabbitmq-cell1-server-0\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.414937 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.591683 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" event={"ID":"8ed16250-e013-4590-a99d-55576235c7d9","Type":"ContainerStarted","Data":"bd32b52879303d81d820ae2f4f99dfa88486ae27d60f07ee8b22f894b1f93c13"}
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.647362 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 22 15:41:50 crc kubenswrapper[4825]: W0122 15:41:50.656430 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45e6f05d_8a80_49ca_add6_e8c41572b664.slice/crio-e3ceee48e9c2e3884952150581d52336200386533246c6730f047e4b1fbdd2dc WatchSource:0}: Error finding container e3ceee48e9c2e3884952150581d52336200386533246c6730f047e4b1fbdd2dc: Status 404 returned error can't find the container with id e3ceee48e9c2e3884952150581d52336200386533246c6730f047e4b1fbdd2dc
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.935402 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.940238 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.942252 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-kvvbq"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.943829 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.943234 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.944840 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.965503 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 22 15:41:50 crc kubenswrapper[4825]: I0122 15:41:50.967940 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.021996 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.108608 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.108824 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.108936 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.109125 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.109179 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-kolla-config\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.109301 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-config-data-default\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.109343 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-deb9e315-dd50-4509-88cd-aa2a2e0d4941\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-deb9e315-dd50-4509-88cd-aa2a2e0d4941\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.109405 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zmgf\" (UniqueName: \"kubernetes.io/projected/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-kube-api-access-6zmgf\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.231623 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.232041 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.232126 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.232145 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-kolla-config\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.232193 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-config-data-default\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.232218 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-deb9e315-dd50-4509-88cd-aa2a2e0d4941\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-deb9e315-dd50-4509-88cd-aa2a2e0d4941\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.232270 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zmgf\" (UniqueName: \"kubernetes.io/projected/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-kube-api-access-6zmgf\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.232293 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.234240 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-config-data-default\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.235021 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.235118 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-kolla-config\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.237640 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.237667 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.237735 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-deb9e315-dd50-4509-88cd-aa2a2e0d4941\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-deb9e315-dd50-4509-88cd-aa2a2e0d4941\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/101f072aae2702dade3695ec380f0f02ed5ad0267a3c12c22161221cec25e34a/globalmount\"" pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.240412 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.240412 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.253013 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zmgf\" (UniqueName: \"kubernetes.io/projected/6b65dd5d-6fe0-4cec-a8d4-d05b099607af-kube-api-access-6zmgf\") pod \"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0"
Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.295804 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-deb9e315-dd50-4509-88cd-aa2a2e0d4941\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-deb9e315-dd50-4509-88cd-aa2a2e0d4941\") pod 
\"openstack-galera-0\" (UID: \"6b65dd5d-6fe0-4cec-a8d4-d05b099607af\") " pod="openstack/openstack-galera-0" Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.356166 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.609720 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"215992ea-1abc-44d0-925b-799eb87bcc09","Type":"ContainerStarted","Data":"84fcdb122a66fc7642a4594c94dba1cabd8125136cd4440e782e4b0be2113eec"} Jan 22 15:41:51 crc kubenswrapper[4825]: I0122 15:41:51.614775 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"45e6f05d-8a80-49ca-add6-e8c41572b664","Type":"ContainerStarted","Data":"e3ceee48e9c2e3884952150581d52336200386533246c6730f047e4b1fbdd2dc"} Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.017872 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.020044 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.022430 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.023031 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.023237 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.023667 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-klmnw" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.026519 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.084642 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 22 15:41:52 crc kubenswrapper[4825]: W0122 15:41:52.113169 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b65dd5d_6fe0_4cec_a8d4_d05b099607af.slice/crio-2a39d57c229cedc2354c4223b3b4a7003a7d55044247cb30e95c1685cad4f327 WatchSource:0}: Error finding container 2a39d57c229cedc2354c4223b3b4a7003a7d55044247cb30e95c1685cad4f327: Status 404 returned error can't find the container with id 2a39d57c229cedc2354c4223b3b4a7003a7d55044247cb30e95c1685cad4f327 Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.160850 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c562767d-1bda-4a9f-beec-5629395ca332-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " 
pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.160960 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-de64becd-9d26-4d2a-bbd1-ef34e73f7122\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-de64becd-9d26-4d2a-bbd1-ef34e73f7122\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.161051 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c562767d-1bda-4a9f-beec-5629395ca332-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.161067 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c562767d-1bda-4a9f-beec-5629395ca332-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.161140 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c562767d-1bda-4a9f-beec-5629395ca332-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.161197 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c562767d-1bda-4a9f-beec-5629395ca332-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.161216 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c562767d-1bda-4a9f-beec-5629395ca332-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.161237 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpsq7\" (UniqueName: \"kubernetes.io/projected/c562767d-1bda-4a9f-beec-5629395ca332-kube-api-access-hpsq7\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.262843 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c562767d-1bda-4a9f-beec-5629395ca332-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.262912 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c562767d-1bda-4a9f-beec-5629395ca332-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.262929 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpsq7\" (UniqueName: 
\"kubernetes.io/projected/c562767d-1bda-4a9f-beec-5629395ca332-kube-api-access-hpsq7\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.262961 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c562767d-1bda-4a9f-beec-5629395ca332-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.264492 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c562767d-1bda-4a9f-beec-5629395ca332-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.264830 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-de64becd-9d26-4d2a-bbd1-ef34e73f7122\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-de64becd-9d26-4d2a-bbd1-ef34e73f7122\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.264867 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c562767d-1bda-4a9f-beec-5629395ca332-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.264896 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c562767d-1bda-4a9f-beec-5629395ca332-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.266794 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c562767d-1bda-4a9f-beec-5629395ca332-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.267427 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c562767d-1bda-4a9f-beec-5629395ca332-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.268403 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c562767d-1bda-4a9f-beec-5629395ca332-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.268549 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.268587 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-de64becd-9d26-4d2a-bbd1-ef34e73f7122\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-de64becd-9d26-4d2a-bbd1-ef34e73f7122\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f8d50ec670cbb9681d82d2ef3648f96107528aea257ee8369b1339c52b2f0bc8/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.270071 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c562767d-1bda-4a9f-beec-5629395ca332-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.274581 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c562767d-1bda-4a9f-beec-5629395ca332-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.274788 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c562767d-1bda-4a9f-beec-5629395ca332-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.359071 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpsq7\" (UniqueName: \"kubernetes.io/projected/c562767d-1bda-4a9f-beec-5629395ca332-kube-api-access-hpsq7\") pod 
\"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.400068 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-de64becd-9d26-4d2a-bbd1-ef34e73f7122\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-de64becd-9d26-4d2a-bbd1-ef34e73f7122\") pod \"openstack-cell1-galera-0\" (UID: \"c562767d-1bda-4a9f-beec-5629395ca332\") " pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.470167 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.472909 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.494747 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-vtqzg" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.504411 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.504673 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.518472 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.904492 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f84t\" (UniqueName: \"kubernetes.io/projected/b85d0578-2876-4355-b5f7-7412f59eb278-kube-api-access-4f84t\") pod \"memcached-0\" (UID: \"b85d0578-2876-4355-b5f7-7412f59eb278\") " pod="openstack/memcached-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.904562 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85d0578-2876-4355-b5f7-7412f59eb278-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b85d0578-2876-4355-b5f7-7412f59eb278\") " pod="openstack/memcached-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.904669 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b85d0578-2876-4355-b5f7-7412f59eb278-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b85d0578-2876-4355-b5f7-7412f59eb278\") " pod="openstack/memcached-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.904732 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b85d0578-2876-4355-b5f7-7412f59eb278-config-data\") pod \"memcached-0\" (UID: \"b85d0578-2876-4355-b5f7-7412f59eb278\") " pod="openstack/memcached-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.904797 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b85d0578-2876-4355-b5f7-7412f59eb278-kolla-config\") pod \"memcached-0\" (UID: \"b85d0578-2876-4355-b5f7-7412f59eb278\") " pod="openstack/memcached-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.905288 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 22 15:41:52 crc kubenswrapper[4825]: I0122 15:41:52.942735 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6b65dd5d-6fe0-4cec-a8d4-d05b099607af","Type":"ContainerStarted","Data":"2a39d57c229cedc2354c4223b3b4a7003a7d55044247cb30e95c1685cad4f327"} Jan 22 15:41:53 crc kubenswrapper[4825]: I0122 15:41:53.007126 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b85d0578-2876-4355-b5f7-7412f59eb278-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b85d0578-2876-4355-b5f7-7412f59eb278\") " pod="openstack/memcached-0" Jan 22 15:41:53 crc kubenswrapper[4825]: I0122 15:41:53.007213 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b85d0578-2876-4355-b5f7-7412f59eb278-config-data\") pod \"memcached-0\" (UID: \"b85d0578-2876-4355-b5f7-7412f59eb278\") " pod="openstack/memcached-0" Jan 22 15:41:53 crc kubenswrapper[4825]: I0122 15:41:53.007319 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b85d0578-2876-4355-b5f7-7412f59eb278-kolla-config\") pod \"memcached-0\" (UID: \"b85d0578-2876-4355-b5f7-7412f59eb278\") " pod="openstack/memcached-0" Jan 22 15:41:53 crc kubenswrapper[4825]: I0122 15:41:53.007447 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f84t\" (UniqueName: \"kubernetes.io/projected/b85d0578-2876-4355-b5f7-7412f59eb278-kube-api-access-4f84t\") pod \"memcached-0\" (UID: \"b85d0578-2876-4355-b5f7-7412f59eb278\") " pod="openstack/memcached-0" Jan 22 15:41:53 crc kubenswrapper[4825]: I0122 15:41:53.007510 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b85d0578-2876-4355-b5f7-7412f59eb278-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b85d0578-2876-4355-b5f7-7412f59eb278\") " pod="openstack/memcached-0" Jan 22 15:41:53 crc kubenswrapper[4825]: I0122 15:41:53.009871 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b85d0578-2876-4355-b5f7-7412f59eb278-config-data\") pod \"memcached-0\" (UID: \"b85d0578-2876-4355-b5f7-7412f59eb278\") " pod="openstack/memcached-0" Jan 22 15:41:53 crc kubenswrapper[4825]: I0122 15:41:53.011008 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b85d0578-2876-4355-b5f7-7412f59eb278-kolla-config\") pod \"memcached-0\" (UID: \"b85d0578-2876-4355-b5f7-7412f59eb278\") " pod="openstack/memcached-0" Jan 22 15:41:53 crc kubenswrapper[4825]: I0122 15:41:53.018516 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85d0578-2876-4355-b5f7-7412f59eb278-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b85d0578-2876-4355-b5f7-7412f59eb278\") " pod="openstack/memcached-0" Jan 22 15:41:53 crc kubenswrapper[4825]: I0122 15:41:53.036259 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b85d0578-2876-4355-b5f7-7412f59eb278-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b85d0578-2876-4355-b5f7-7412f59eb278\") " pod="openstack/memcached-0" Jan 22 15:41:53 crc kubenswrapper[4825]: I0122 15:41:53.084944 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f84t\" (UniqueName: \"kubernetes.io/projected/b85d0578-2876-4355-b5f7-7412f59eb278-kube-api-access-4f84t\") pod \"memcached-0\" (UID: \"b85d0578-2876-4355-b5f7-7412f59eb278\") " pod="openstack/memcached-0" Jan 22 15:41:53 crc kubenswrapper[4825]: I0122 15:41:53.098377 4825 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 22 15:41:53 crc kubenswrapper[4825]: I0122 15:41:53.989012 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 22 15:41:54 crc kubenswrapper[4825]: I0122 15:41:54.186669 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 15:41:54 crc kubenswrapper[4825]: W0122 15:41:54.224739 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc562767d_1bda_4a9f_beec_5629395ca332.slice/crio-2c722d454e810850bef57693795873ca5098721a1527cc3fe64d2ae66f0913b6 WatchSource:0}: Error finding container 2c722d454e810850bef57693795873ca5098721a1527cc3fe64d2ae66f0913b6: Status 404 returned error can't find the container with id 2c722d454e810850bef57693795873ca5098721a1527cc3fe64d2ae66f0913b6 Jan 22 15:41:54 crc kubenswrapper[4825]: I0122 15:41:54.889190 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 15:41:54 crc kubenswrapper[4825]: I0122 15:41:54.890765 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 15:41:54 crc kubenswrapper[4825]: I0122 15:41:54.895430 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-wgxsw" Jan 22 15:41:54 crc kubenswrapper[4825]: I0122 15:41:54.897041 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.014437 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx7mg\" (UniqueName: \"kubernetes.io/projected/684234f5-b409-42a4-9494-52a0565b000c-kube-api-access-wx7mg\") pod \"kube-state-metrics-0\" (UID: \"684234f5-b409-42a4-9494-52a0565b000c\") " pod="openstack/kube-state-metrics-0" Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.023616 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b85d0578-2876-4355-b5f7-7412f59eb278","Type":"ContainerStarted","Data":"86c4688904ed2883660530900ceba3e8162b311e421d30fbfc6355e4627468db"} Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.027667 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c562767d-1bda-4a9f-beec-5629395ca332","Type":"ContainerStarted","Data":"2c722d454e810850bef57693795873ca5098721a1527cc3fe64d2ae66f0913b6"} Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.116623 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx7mg\" (UniqueName: \"kubernetes.io/projected/684234f5-b409-42a4-9494-52a0565b000c-kube-api-access-wx7mg\") pod \"kube-state-metrics-0\" (UID: \"684234f5-b409-42a4-9494-52a0565b000c\") " pod="openstack/kube-state-metrics-0" Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.211233 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx7mg\" (UniqueName: 
\"kubernetes.io/projected/684234f5-b409-42a4-9494-52a0565b000c-kube-api-access-wx7mg\") pod \"kube-state-metrics-0\" (UID: \"684234f5-b409-42a4-9494-52a0565b000c\") " pod="openstack/kube-state-metrics-0" Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.227493 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.832908 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.835256 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.867091 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.867358 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.867527 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-qh7kg" Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.867544 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.867924 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.872252 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.972832 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" 
(UniqueName: \"kubernetes.io/empty-dir/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.972973 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.973034 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.973149 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8ks2\" (UniqueName: \"kubernetes.io/projected/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-kube-api-access-n8ks2\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.973308 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.973366 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:55 crc kubenswrapper[4825]: I0122 15:41:55.973485 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.076926 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.076993 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.077059 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8ks2\" (UniqueName: \"kubernetes.io/projected/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-kube-api-access-n8ks2\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.077126 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.077149 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.077196 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.077235 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.081451 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.090380 4825 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.090572 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.091081 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.091881 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.117510 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.136169 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8ks2\" (UniqueName: 
\"kubernetes.io/projected/1f7994d5-5cc8-4830-bcd1-9f63b9109a09-kube-api-access-n8ks2\") pod \"alertmanager-metric-storage-0\" (UID: \"1f7994d5-5cc8-4830-bcd1-9f63b9109a09\") " pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.176385 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.356332 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.358295 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.374774 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.374952 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.375038 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.375145 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.375264 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.375309 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.375277 4825 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"prometheus-metric-storage" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.375428 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-g98sx" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.434800 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.487353 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhp4r\" (UniqueName: \"kubernetes.io/projected/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-kube-api-access-lhp4r\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.487449 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.487502 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.487547 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-prometheus-metric-storage-rulefiles-2\") pod 
\"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.487574 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.487624 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.487671 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.487725 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-config\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.487767 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.487803 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.589618 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.589689 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.589718 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 
15:41:56.589759 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.589807 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-config\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.589836 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.589865 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.590218 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhp4r\" (UniqueName: \"kubernetes.io/projected/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-kube-api-access-lhp4r\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.590288 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.590341 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.592120 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.592122 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.593064 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc 
kubenswrapper[4825]: I0122 15:41:56.593689 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.596757 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.597052 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.597093 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f384ca950549f4e6139e9d3c1ffd101c55a7a0c2a28a49f66cc0b4e36aaf3b93/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.628591 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 
15:41:56.633953 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.635402 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-config\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.638637 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhp4r\" (UniqueName: \"kubernetes.io/projected/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-kube-api-access-lhp4r\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.690520 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\") pod \"prometheus-metric-storage-0\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:56 crc kubenswrapper[4825]: I0122 15:41:56.992549 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 15:41:58 crc kubenswrapper[4825]: I0122 15:41:58.941517 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 15:41:58 crc kubenswrapper[4825]: I0122 15:41:58.954767 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:58 crc kubenswrapper[4825]: I0122 15:41:58.966337 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 22 15:41:58 crc kubenswrapper[4825]: I0122 15:41:58.966673 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-gph75" Jan 22 15:41:58 crc kubenswrapper[4825]: I0122 15:41:58.966940 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 22 15:41:58 crc kubenswrapper[4825]: I0122 15:41:58.967136 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 22 15:41:58 crc kubenswrapper[4825]: I0122 15:41:58.972183 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 22 15:41:58 crc kubenswrapper[4825]: I0122 15:41:58.989863 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.401944 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d975cad-bc38-442f-acdd-0c8fa4f3b429-config\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.405724 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d975cad-bc38-442f-acdd-0c8fa4f3b429-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.405820 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pvc-fc165d81-4f3b-4e69-93bb-61f0cd56ece1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc165d81-4f3b-4e69-93bb-61f0cd56ece1\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.405863 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d975cad-bc38-442f-acdd-0c8fa4f3b429-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.405911 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjspv\" (UniqueName: \"kubernetes.io/projected/4d975cad-bc38-442f-acdd-0c8fa4f3b429-kube-api-access-rjspv\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.406277 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d975cad-bc38-442f-acdd-0c8fa4f3b429-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.406329 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d975cad-bc38-442f-acdd-0c8fa4f3b429-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.406379 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4d975cad-bc38-442f-acdd-0c8fa4f3b429-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.508434 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d975cad-bc38-442f-acdd-0c8fa4f3b429-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.508487 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d975cad-bc38-442f-acdd-0c8fa4f3b429-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.508513 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4d975cad-bc38-442f-acdd-0c8fa4f3b429-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.508558 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d975cad-bc38-442f-acdd-0c8fa4f3b429-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.508582 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d975cad-bc38-442f-acdd-0c8fa4f3b429-config\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " 
pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.508612 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fc165d81-4f3b-4e69-93bb-61f0cd56ece1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc165d81-4f3b-4e69-93bb-61f0cd56ece1\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.508637 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d975cad-bc38-442f-acdd-0c8fa4f3b429-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.508662 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjspv\" (UniqueName: \"kubernetes.io/projected/4d975cad-bc38-442f-acdd-0c8fa4f3b429-kube-api-access-rjspv\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.512083 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.512162 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fc165d81-4f3b-4e69-93bb-61f0cd56ece1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc165d81-4f3b-4e69-93bb-61f0cd56ece1\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5f58cadc3d9f93f05bc44617e97a04b765f60f67d8ad1007403f0a5791470e39/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.514621 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d975cad-bc38-442f-acdd-0c8fa4f3b429-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.515347 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4d975cad-bc38-442f-acdd-0c8fa4f3b429-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.515515 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d975cad-bc38-442f-acdd-0c8fa4f3b429-config\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0" Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.528004 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d975cad-bc38-442f-acdd-0c8fa4f3b429-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0" Jan 22 
15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.534089 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d975cad-bc38-442f-acdd-0c8fa4f3b429-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0"
Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.559954 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjspv\" (UniqueName: \"kubernetes.io/projected/4d975cad-bc38-442f-acdd-0c8fa4f3b429-kube-api-access-rjspv\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0"
Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.572784 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d975cad-bc38-442f-acdd-0c8fa4f3b429-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0"
Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.674136 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fc165d81-4f3b-4e69-93bb-61f0cd56ece1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fc165d81-4f3b-4e69-93bb-61f0cd56ece1\") pod \"ovsdbserver-nb-0\" (UID: \"4d975cad-bc38-442f-acdd-0c8fa4f3b429\") " pod="openstack/ovsdbserver-nb-0"
Jan 22 15:41:59 crc kubenswrapper[4825]: I0122 15:41:59.883815 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.198758 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-snszk"]
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.208417 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.210484 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-crb25"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.217843 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-snszk"]
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.225054 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.225415 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.239786 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-bkgcs"]
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.243443 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.245263 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bkgcs"]
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.325897 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306a03b3-2cdb-494a-ab5b-51d80fe3586c-combined-ca-bundle\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.325953 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw552\" (UniqueName: \"kubernetes.io/projected/306a03b3-2cdb-494a-ab5b-51d80fe3586c-kube-api-access-jw552\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.326001 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/306a03b3-2cdb-494a-ab5b-51d80fe3586c-var-run\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.326081 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/306a03b3-2cdb-494a-ab5b-51d80fe3586c-var-run-ovn\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.326131 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/306a03b3-2cdb-494a-ab5b-51d80fe3586c-ovn-controller-tls-certs\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.326160 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/306a03b3-2cdb-494a-ab5b-51d80fe3586c-var-log-ovn\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.326180 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/306a03b3-2cdb-494a-ab5b-51d80fe3586c-scripts\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.431322 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw552\" (UniqueName: \"kubernetes.io/projected/306a03b3-2cdb-494a-ab5b-51d80fe3586c-kube-api-access-jw552\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.432352 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/306a03b3-2cdb-494a-ab5b-51d80fe3586c-var-run\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.432458 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b08ffe2b-2e43-437b-beb1-2eb436baa4ec-var-run\") pod \"ovn-controller-ovs-bkgcs\" (UID: \"b08ffe2b-2e43-437b-beb1-2eb436baa4ec\") " pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.432528 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/306a03b3-2cdb-494a-ab5b-51d80fe3586c-var-run-ovn\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.432570 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b08ffe2b-2e43-437b-beb1-2eb436baa4ec-etc-ovs\") pod \"ovn-controller-ovs-bkgcs\" (UID: \"b08ffe2b-2e43-437b-beb1-2eb436baa4ec\") " pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.432617 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b08ffe2b-2e43-437b-beb1-2eb436baa4ec-var-lib\") pod \"ovn-controller-ovs-bkgcs\" (UID: \"b08ffe2b-2e43-437b-beb1-2eb436baa4ec\") " pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.432640 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/306a03b3-2cdb-494a-ab5b-51d80fe3586c-ovn-controller-tls-certs\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.432684 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/306a03b3-2cdb-494a-ab5b-51d80fe3586c-var-log-ovn\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.432718 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/306a03b3-2cdb-494a-ab5b-51d80fe3586c-scripts\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.432869 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b08ffe2b-2e43-437b-beb1-2eb436baa4ec-var-log\") pod \"ovn-controller-ovs-bkgcs\" (UID: \"b08ffe2b-2e43-437b-beb1-2eb436baa4ec\") " pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.432908 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxj97\" (UniqueName: \"kubernetes.io/projected/b08ffe2b-2e43-437b-beb1-2eb436baa4ec-kube-api-access-zxj97\") pod \"ovn-controller-ovs-bkgcs\" (UID: \"b08ffe2b-2e43-437b-beb1-2eb436baa4ec\") " pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.432961 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b08ffe2b-2e43-437b-beb1-2eb436baa4ec-scripts\") pod \"ovn-controller-ovs-bkgcs\" (UID: \"b08ffe2b-2e43-437b-beb1-2eb436baa4ec\") " pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.433074 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306a03b3-2cdb-494a-ab5b-51d80fe3586c-combined-ca-bundle\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.433554 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/306a03b3-2cdb-494a-ab5b-51d80fe3586c-var-run\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.433696 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/306a03b3-2cdb-494a-ab5b-51d80fe3586c-var-run-ovn\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.434161 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/306a03b3-2cdb-494a-ab5b-51d80fe3586c-var-log-ovn\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.435915 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/306a03b3-2cdb-494a-ab5b-51d80fe3586c-scripts\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.443432 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306a03b3-2cdb-494a-ab5b-51d80fe3586c-combined-ca-bundle\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.460633 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/306a03b3-2cdb-494a-ab5b-51d80fe3586c-ovn-controller-tls-certs\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.465408 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw552\" (UniqueName: \"kubernetes.io/projected/306a03b3-2cdb-494a-ab5b-51d80fe3586c-kube-api-access-jw552\") pod \"ovn-controller-snszk\" (UID: \"306a03b3-2cdb-494a-ab5b-51d80fe3586c\") " pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.533567 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b08ffe2b-2e43-437b-beb1-2eb436baa4ec-var-run\") pod \"ovn-controller-ovs-bkgcs\" (UID: \"b08ffe2b-2e43-437b-beb1-2eb436baa4ec\") " pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.533629 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b08ffe2b-2e43-437b-beb1-2eb436baa4ec-etc-ovs\") pod \"ovn-controller-ovs-bkgcs\" (UID: \"b08ffe2b-2e43-437b-beb1-2eb436baa4ec\") " pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.533652 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b08ffe2b-2e43-437b-beb1-2eb436baa4ec-var-lib\") pod \"ovn-controller-ovs-bkgcs\" (UID: \"b08ffe2b-2e43-437b-beb1-2eb436baa4ec\") " pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.533685 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b08ffe2b-2e43-437b-beb1-2eb436baa4ec-var-log\") pod \"ovn-controller-ovs-bkgcs\" (UID: \"b08ffe2b-2e43-437b-beb1-2eb436baa4ec\") " pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.533708 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxj97\" (UniqueName: \"kubernetes.io/projected/b08ffe2b-2e43-437b-beb1-2eb436baa4ec-kube-api-access-zxj97\") pod \"ovn-controller-ovs-bkgcs\" (UID: \"b08ffe2b-2e43-437b-beb1-2eb436baa4ec\") " pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.533749 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b08ffe2b-2e43-437b-beb1-2eb436baa4ec-scripts\") pod \"ovn-controller-ovs-bkgcs\" (UID: \"b08ffe2b-2e43-437b-beb1-2eb436baa4ec\") " pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.534339 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b08ffe2b-2e43-437b-beb1-2eb436baa4ec-var-log\") pod \"ovn-controller-ovs-bkgcs\" (UID: \"b08ffe2b-2e43-437b-beb1-2eb436baa4ec\") " pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.534437 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b08ffe2b-2e43-437b-beb1-2eb436baa4ec-var-run\") pod \"ovn-controller-ovs-bkgcs\" (UID: \"b08ffe2b-2e43-437b-beb1-2eb436baa4ec\") " pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.534573 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b08ffe2b-2e43-437b-beb1-2eb436baa4ec-etc-ovs\") pod \"ovn-controller-ovs-bkgcs\" (UID: \"b08ffe2b-2e43-437b-beb1-2eb436baa4ec\") " pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.534743 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b08ffe2b-2e43-437b-beb1-2eb436baa4ec-var-lib\") pod \"ovn-controller-ovs-bkgcs\" (UID: \"b08ffe2b-2e43-437b-beb1-2eb436baa4ec\") " pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.536576 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b08ffe2b-2e43-437b-beb1-2eb436baa4ec-scripts\") pod \"ovn-controller-ovs-bkgcs\" (UID: \"b08ffe2b-2e43-437b-beb1-2eb436baa4ec\") " pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.578669 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxj97\" (UniqueName: \"kubernetes.io/projected/b08ffe2b-2e43-437b-beb1-2eb436baa4ec-kube-api-access-zxj97\") pod \"ovn-controller-ovs-bkgcs\" (UID: \"b08ffe2b-2e43-437b-beb1-2eb436baa4ec\") " pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.583099 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-snszk"
Jan 22 15:42:00 crc kubenswrapper[4825]: I0122 15:42:00.599876 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-bkgcs"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.048386 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.051424 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.056319 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.056541 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.056867 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.057061 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-jk7q5"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.064196 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.240562 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/892f29f1-29c4-4f1d-83af-660bf2983766-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.240938 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/892f29f1-29c4-4f1d-83af-660bf2983766-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.241016 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4b33993f-a131-41ee-98ff-269fa5b2b1c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b33993f-a131-41ee-98ff-269fa5b2b1c4\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.241046 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/892f29f1-29c4-4f1d-83af-660bf2983766-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.241070 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9qdh\" (UniqueName: \"kubernetes.io/projected/892f29f1-29c4-4f1d-83af-660bf2983766-kube-api-access-p9qdh\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.241124 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/892f29f1-29c4-4f1d-83af-660bf2983766-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.241195 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892f29f1-29c4-4f1d-83af-660bf2983766-config\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.241245 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892f29f1-29c4-4f1d-83af-660bf2983766-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.342933 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/892f29f1-29c4-4f1d-83af-660bf2983766-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.343029 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/892f29f1-29c4-4f1d-83af-660bf2983766-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.343078 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4b33993f-a131-41ee-98ff-269fa5b2b1c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b33993f-a131-41ee-98ff-269fa5b2b1c4\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.343106 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/892f29f1-29c4-4f1d-83af-660bf2983766-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.343130 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9qdh\" (UniqueName: \"kubernetes.io/projected/892f29f1-29c4-4f1d-83af-660bf2983766-kube-api-access-p9qdh\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.343167 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/892f29f1-29c4-4f1d-83af-660bf2983766-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.343227 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892f29f1-29c4-4f1d-83af-660bf2983766-config\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.343270 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892f29f1-29c4-4f1d-83af-660bf2983766-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.343960 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/892f29f1-29c4-4f1d-83af-660bf2983766-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.345347 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892f29f1-29c4-4f1d-83af-660bf2983766-config\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.349246 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.349288 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4b33993f-a131-41ee-98ff-269fa5b2b1c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b33993f-a131-41ee-98ff-269fa5b2b1c4\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/55521f394d2a1b069aca647d572008e53b9ff7e85c46437af678a935e82e4141/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.349482 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892f29f1-29c4-4f1d-83af-660bf2983766-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.350574 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/892f29f1-29c4-4f1d-83af-660bf2983766-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.352469 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/892f29f1-29c4-4f1d-83af-660bf2983766-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.353233 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/892f29f1-29c4-4f1d-83af-660bf2983766-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.367922 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9qdh\" (UniqueName: \"kubernetes.io/projected/892f29f1-29c4-4f1d-83af-660bf2983766-kube-api-access-p9qdh\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.419842 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4b33993f-a131-41ee-98ff-269fa5b2b1c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b33993f-a131-41ee-98ff-269fa5b2b1c4\") pod \"ovsdbserver-sb-0\" (UID: \"892f29f1-29c4-4f1d-83af-660bf2983766\") " pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:03 crc kubenswrapper[4825]: I0122 15:42:03.693102 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.304193 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"]
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.305965 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.312710 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-bcz2b"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.312778 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.312810 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.313094 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.315495 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.330138 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"]
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.403228 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/dba34c46-ef4e-4315-8b1d-1f1946e329a7-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-j9kkt\" (UID: \"dba34c46-ef4e-4315-8b1d-1f1946e329a7\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.403286 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dba34c46-ef4e-4315-8b1d-1f1946e329a7-config\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-j9kkt\" (UID: \"dba34c46-ef4e-4315-8b1d-1f1946e329a7\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.403367 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2svst\" (UniqueName: \"kubernetes.io/projected/dba34c46-ef4e-4315-8b1d-1f1946e329a7-kube-api-access-2svst\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-j9kkt\" (UID: \"dba34c46-ef4e-4315-8b1d-1f1946e329a7\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.403441 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/dba34c46-ef4e-4315-8b1d-1f1946e329a7-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-j9kkt\" (UID: \"dba34c46-ef4e-4315-8b1d-1f1946e329a7\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.403753 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dba34c46-ef4e-4315-8b1d-1f1946e329a7-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-j9kkt\" (UID: \"dba34c46-ef4e-4315-8b1d-1f1946e329a7\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.506062 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dba34c46-ef4e-4315-8b1d-1f1946e329a7-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-j9kkt\" (UID: \"dba34c46-ef4e-4315-8b1d-1f1946e329a7\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.506146 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/dba34c46-ef4e-4315-8b1d-1f1946e329a7-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-j9kkt\" (UID: \"dba34c46-ef4e-4315-8b1d-1f1946e329a7\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.506180 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dba34c46-ef4e-4315-8b1d-1f1946e329a7-config\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-j9kkt\" (UID: \"dba34c46-ef4e-4315-8b1d-1f1946e329a7\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.506219 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2svst\" (UniqueName: \"kubernetes.io/projected/dba34c46-ef4e-4315-8b1d-1f1946e329a7-kube-api-access-2svst\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-j9kkt\" (UID: \"dba34c46-ef4e-4315-8b1d-1f1946e329a7\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.506306 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/dba34c46-ef4e-4315-8b1d-1f1946e329a7-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-j9kkt\" (UID: \"dba34c46-ef4e-4315-8b1d-1f1946e329a7\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.509956 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dba34c46-ef4e-4315-8b1d-1f1946e329a7-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-j9kkt\" (UID: \"dba34c46-ef4e-4315-8b1d-1f1946e329a7\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.510120 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dba34c46-ef4e-4315-8b1d-1f1946e329a7-config\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-j9kkt\" (UID: \"dba34c46-ef4e-4315-8b1d-1f1946e329a7\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.513557 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/dba34c46-ef4e-4315-8b1d-1f1946e329a7-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-j9kkt\" (UID: \"dba34c46-ef4e-4315-8b1d-1f1946e329a7\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.516571 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/dba34c46-ef4e-4315-8b1d-1f1946e329a7-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-j9kkt\" (UID: \"dba34c46-ef4e-4315-8b1d-1f1946e329a7\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.549401 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8"]
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.549922 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2svst\" (UniqueName: \"kubernetes.io/projected/dba34c46-ef4e-4315-8b1d-1f1946e329a7-kube-api-access-2svst\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-j9kkt\" (UID: \"dba34c46-ef4e-4315-8b1d-1f1946e329a7\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.550830 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.555409 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.555515 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.555656 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.559907 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8"]
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.629895 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.663938 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs"]
Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.666635 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.672068 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.672782 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.704762 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs"] Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.709892 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/28ded780-a2df-4624-807e-2426859b0a95-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-8vxr8\" (UID: \"28ded780-a2df-4624-807e-2426859b0a95\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.710011 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flkhv\" (UniqueName: \"kubernetes.io/projected/28ded780-a2df-4624-807e-2426859b0a95-kube-api-access-flkhv\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-8vxr8\" (UID: \"28ded780-a2df-4624-807e-2426859b0a95\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.710119 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ded780-a2df-4624-807e-2426859b0a95-config\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-8vxr8\" (UID: \"28ded780-a2df-4624-807e-2426859b0a95\") " 
pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.710225 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/28ded780-a2df-4624-807e-2426859b0a95-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-8vxr8\" (UID: \"28ded780-a2df-4624-807e-2426859b0a95\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.710300 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28ded780-a2df-4624-807e-2426859b0a95-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-8vxr8\" (UID: \"28ded780-a2df-4624-807e-2426859b0a95\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.710366 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/28ded780-a2df-4624-807e-2426859b0a95-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-8vxr8\" (UID: \"28ded780-a2df-4624-807e-2426859b0a95\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.813202 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14563680-8847-4136-9955-836dd8331930-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-r4krs\" (UID: \"14563680-8847-4136-9955-836dd8331930\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 
15:42:07.813248 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr9lp\" (UniqueName: \"kubernetes.io/projected/14563680-8847-4136-9955-836dd8331930-kube-api-access-pr9lp\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-r4krs\" (UID: \"14563680-8847-4136-9955-836dd8331930\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.813299 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/28ded780-a2df-4624-807e-2426859b0a95-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-8vxr8\" (UID: \"28ded780-a2df-4624-807e-2426859b0a95\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.813325 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28ded780-a2df-4624-807e-2426859b0a95-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-8vxr8\" (UID: \"28ded780-a2df-4624-807e-2426859b0a95\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.813372 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/28ded780-a2df-4624-807e-2426859b0a95-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-8vxr8\" (UID: \"28ded780-a2df-4624-807e-2426859b0a95\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.813400 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: 
\"kubernetes.io/secret/28ded780-a2df-4624-807e-2426859b0a95-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-8vxr8\" (UID: \"28ded780-a2df-4624-807e-2426859b0a95\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.813421 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14563680-8847-4136-9955-836dd8331930-config\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-r4krs\" (UID: \"14563680-8847-4136-9955-836dd8331930\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.813443 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flkhv\" (UniqueName: \"kubernetes.io/projected/28ded780-a2df-4624-807e-2426859b0a95-kube-api-access-flkhv\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-8vxr8\" (UID: \"28ded780-a2df-4624-807e-2426859b0a95\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.813483 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/14563680-8847-4136-9955-836dd8331930-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-r4krs\" (UID: \"14563680-8847-4136-9955-836dd8331930\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.813506 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/14563680-8847-4136-9955-836dd8331930-cloudkitty-lokistack-query-frontend-http\") pod 
\"cloudkitty-lokistack-query-frontend-5cd44666df-r4krs\" (UID: \"14563680-8847-4136-9955-836dd8331930\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.813530 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ded780-a2df-4624-807e-2426859b0a95-config\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-8vxr8\" (UID: \"28ded780-a2df-4624-807e-2426859b0a95\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.814467 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ded780-a2df-4624-807e-2426859b0a95-config\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-8vxr8\" (UID: \"28ded780-a2df-4624-807e-2426859b0a95\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.815471 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28ded780-a2df-4624-807e-2426859b0a95-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-8vxr8\" (UID: \"28ded780-a2df-4624-807e-2426859b0a95\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.841156 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/28ded780-a2df-4624-807e-2426859b0a95-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-8vxr8\" (UID: \"28ded780-a2df-4624-807e-2426859b0a95\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.842769 4825 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/28ded780-a2df-4624-807e-2426859b0a95-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-8vxr8\" (UID: \"28ded780-a2df-4624-807e-2426859b0a95\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.849052 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/28ded780-a2df-4624-807e-2426859b0a95-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-8vxr8\" (UID: \"28ded780-a2df-4624-807e-2426859b0a95\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.873822 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flkhv\" (UniqueName: \"kubernetes.io/projected/28ded780-a2df-4624-807e-2426859b0a95-kube-api-access-flkhv\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-8vxr8\" (UID: \"28ded780-a2df-4624-807e-2426859b0a95\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.911477 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.918503 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14563680-8847-4136-9955-836dd8331930-config\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-r4krs\" (UID: \"14563680-8847-4136-9955-836dd8331930\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.919732 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14563680-8847-4136-9955-836dd8331930-config\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-r4krs\" (UID: \"14563680-8847-4136-9955-836dd8331930\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.919791 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/14563680-8847-4136-9955-836dd8331930-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-r4krs\" (UID: \"14563680-8847-4136-9955-836dd8331930\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.919851 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/14563680-8847-4136-9955-836dd8331930-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-r4krs\" (UID: \"14563680-8847-4136-9955-836dd8331930\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.920627 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pr9lp\" (UniqueName: \"kubernetes.io/projected/14563680-8847-4136-9955-836dd8331930-kube-api-access-pr9lp\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-r4krs\" (UID: \"14563680-8847-4136-9955-836dd8331930\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.920653 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14563680-8847-4136-9955-836dd8331930-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-r4krs\" (UID: \"14563680-8847-4136-9955-836dd8331930\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.933740 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/14563680-8847-4136-9955-836dd8331930-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-r4krs\" (UID: \"14563680-8847-4136-9955-836dd8331930\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.934378 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/14563680-8847-4136-9955-836dd8331930-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-r4krs\" (UID: \"14563680-8847-4136-9955-836dd8331930\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.921712 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/14563680-8847-4136-9955-836dd8331930-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-r4krs\" (UID: \"14563680-8847-4136-9955-836dd8331930\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.949846 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm"] Jan 22 15:42:07 crc kubenswrapper[4825]: I0122 15:42:07.951101 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:07.965387 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:07.965600 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:07.965650 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:07.965767 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:07.965957 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:07.966077 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:07.975611 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr9lp\" (UniqueName: \"kubernetes.io/projected/14563680-8847-4136-9955-836dd8331930-kube-api-access-pr9lp\") pod 
\"cloudkitty-lokistack-query-frontend-5cd44666df-r4krs\" (UID: \"14563680-8847-4136-9955-836dd8331930\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:07.987218 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8"] Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:07.988648 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:07.989524 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:07.997430 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-vkssp" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.003378 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm"] Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.126593 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8"] Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.133280 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0112c91-a6fe-4b93-aff9-49f108a64603-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.133360 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.133406 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.133444 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d0112c91-a6fe-4b93-aff9-49f108a64603-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.133477 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nphdr\" (UniqueName: \"kubernetes.io/projected/d0112c91-a6fe-4b93-aff9-49f108a64603-kube-api-access-nphdr\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.133538 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " 
pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.133567 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.133593 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d0112c91-a6fe-4b93-aff9-49f108a64603-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.133618 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d0112c91-a6fe-4b93-aff9-49f108a64603-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.133652 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0112c91-a6fe-4b93-aff9-49f108a64603-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.133689 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.133728 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.133755 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0112c91-a6fe-4b93-aff9-49f108a64603-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.133788 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tvbg\" (UniqueName: \"kubernetes.io/projected/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-kube-api-access-4tvbg\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.133817 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: 
\"kubernetes.io/secret/d0112c91-a6fe-4b93-aff9-49f108a64603-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.133846 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d0112c91-a6fe-4b93-aff9-49f108a64603-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.133881 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.133921 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.235193 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d0112c91-a6fe-4b93-aff9-49f108a64603-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc 
kubenswrapper[4825]: I0122 15:42:08.235239 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nphdr\" (UniqueName: \"kubernetes.io/projected/d0112c91-a6fe-4b93-aff9-49f108a64603-kube-api-access-nphdr\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.235285 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.235307 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.235327 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d0112c91-a6fe-4b93-aff9-49f108a64603-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.235346 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: 
\"kubernetes.io/secret/d0112c91-a6fe-4b93-aff9-49f108a64603-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.235376 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0112c91-a6fe-4b93-aff9-49f108a64603-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.235402 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.235439 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.235460 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0112c91-a6fe-4b93-aff9-49f108a64603-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: 
\"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.235481 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tvbg\" (UniqueName: \"kubernetes.io/projected/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-kube-api-access-4tvbg\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.235495 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d0112c91-a6fe-4b93-aff9-49f108a64603-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.235511 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d0112c91-a6fe-4b93-aff9-49f108a64603-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.235525 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.235551 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: 
\"kubernetes.io/secret/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.235624 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0112c91-a6fe-4b93-aff9-49f108a64603-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.235642 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.235665 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.236350 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0112c91-a6fe-4b93-aff9-49f108a64603-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.236538 4825 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d0112c91-a6fe-4b93-aff9-49f108a64603-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: E0122 15:42:08.236650 4825 secret.go:188] Couldn't get secret openstack/cloudkitty-lokistack-gateway-http: secret "cloudkitty-lokistack-gateway-http" not found Jan 22 15:42:08 crc kubenswrapper[4825]: E0122 15:42:08.236697 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0112c91-a6fe-4b93-aff9-49f108a64603-tls-secret podName:d0112c91-a6fe-4b93-aff9-49f108a64603 nodeName:}" failed. No retries permitted until 2026-01-22 15:42:08.736680598 +0000 UTC m=+1075.498207508 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/d0112c91-a6fe-4b93-aff9-49f108a64603-tls-secret") pod "cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" (UID: "d0112c91-a6fe-4b93-aff9-49f108a64603") : secret "cloudkitty-lokistack-gateway-http" not found Jan 22 15:42:08 crc kubenswrapper[4825]: E0122 15:42:08.236750 4825 secret.go:188] Couldn't get secret openstack/cloudkitty-lokistack-gateway-http: secret "cloudkitty-lokistack-gateway-http" not found Jan 22 15:42:08 crc kubenswrapper[4825]: E0122 15:42:08.236806 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-tls-secret podName:c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc nodeName:}" failed. No retries permitted until 2026-01-22 15:42:08.736788361 +0000 UTC m=+1075.498315351 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-tls-secret") pod "cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" (UID: "c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc") : secret "cloudkitty-lokistack-gateway-http" not found Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.236944 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.237065 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: E0122 15:42:08.237113 4825 configmap.go:193] Couldn't get configMap openstack/cloudkitty-lokistack-gateway-ca-bundle: configmap "cloudkitty-lokistack-gateway-ca-bundle" not found Jan 22 15:42:08 crc kubenswrapper[4825]: E0122 15:42:08.237142 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d0112c91-a6fe-4b93-aff9-49f108a64603-cloudkitty-lokistack-gateway-ca-bundle podName:d0112c91-a6fe-4b93-aff9-49f108a64603 nodeName:}" failed. No retries permitted until 2026-01-22 15:42:08.737133121 +0000 UTC m=+1075.498660031 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloudkitty-lokistack-gateway-ca-bundle" (UniqueName: "kubernetes.io/configmap/d0112c91-a6fe-4b93-aff9-49f108a64603-cloudkitty-lokistack-gateway-ca-bundle") pod "cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" (UID: "d0112c91-a6fe-4b93-aff9-49f108a64603") : configmap "cloudkitty-lokistack-gateway-ca-bundle" not found Jan 22 15:42:08 crc kubenswrapper[4825]: E0122 15:42:08.237481 4825 configmap.go:193] Couldn't get configMap openstack/cloudkitty-lokistack-gateway-ca-bundle: configmap "cloudkitty-lokistack-gateway-ca-bundle" not found Jan 22 15:42:08 crc kubenswrapper[4825]: E0122 15:42:08.237564 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-cloudkitty-lokistack-gateway-ca-bundle podName:c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc nodeName:}" failed. No retries permitted until 2026-01-22 15:42:08.737537562 +0000 UTC m=+1075.499064472 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloudkitty-lokistack-gateway-ca-bundle" (UniqueName: "kubernetes.io/configmap/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-cloudkitty-lokistack-gateway-ca-bundle") pod "cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" (UID: "c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc") : configmap "cloudkitty-lokistack-gateway-ca-bundle" not found Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.237859 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.237907 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.238062 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0112c91-a6fe-4b93-aff9-49f108a64603-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.238435 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d0112c91-a6fe-4b93-aff9-49f108a64603-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.240858 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d0112c91-a6fe-4b93-aff9-49f108a64603-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.241462 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 
15:42:08.242834 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.252740 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d0112c91-a6fe-4b93-aff9-49f108a64603-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.264614 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nphdr\" (UniqueName: \"kubernetes.io/projected/d0112c91-a6fe-4b93-aff9-49f108a64603-kube-api-access-nphdr\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.265268 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tvbg\" (UniqueName: \"kubernetes.io/projected/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-kube-api-access-4tvbg\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.501028 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.502895 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.507256 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.507335 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.516812 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.646108 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.647462 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57cba631-503b-4795-8463-3d1e50957d58-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.647546 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/57cba631-503b-4795-8463-3d1e50957d58-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.647605 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " 
pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.647653 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxzl9\" (UniqueName: \"kubernetes.io/projected/57cba631-503b-4795-8463-3d1e50957d58-kube-api-access-zxzl9\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.647705 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/57cba631-503b-4795-8463-3d1e50957d58-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.647730 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57cba631-503b-4795-8463-3d1e50957d58-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.647760 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.647852 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/57cba631-503b-4795-8463-3d1e50957d58-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.650918 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.655333 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.656124 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.666593 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.745083 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.747296 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.750708 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0112c91-a6fe-4b93-aff9-49f108a64603-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.750749 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.750780 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxzl9\" (UniqueName: \"kubernetes.io/projected/57cba631-503b-4795-8463-3d1e50957d58-kube-api-access-zxzl9\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.750801 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs4tx\" (UniqueName: \"kubernetes.io/projected/d2333364-70e2-4c7a-933e-142e0ebed301-kube-api-access-hs4tx\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.750820 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d2333364-70e2-4c7a-933e-142e0ebed301-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.750838 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.750858 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/57cba631-503b-4795-8463-3d1e50957d58-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.750875 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57cba631-503b-4795-8463-3d1e50957d58-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.750894 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d0112c91-a6fe-4b93-aff9-49f108a64603-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: 
I0122 15:42:08.750917 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.750937 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.750957 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.750995 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/d2333364-70e2-4c7a-933e-142e0ebed301-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.751021 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/d2333364-70e2-4c7a-933e-142e0ebed301-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " 
pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.751058 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/d2333364-70e2-4c7a-933e-142e0ebed301-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.751073 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2333364-70e2-4c7a-933e-142e0ebed301-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.751100 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/57cba631-503b-4795-8463-3d1e50957d58-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.751142 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57cba631-503b-4795-8463-3d1e50957d58-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.751177 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: 
\"kubernetes.io/secret/57cba631-503b-4795-8463-3d1e50957d58-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.753438 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.753826 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.754396 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0112c91-a6fe-4b93-aff9-49f108a64603-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.754659 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.756164 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57cba631-503b-4795-8463-3d1e50957d58-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.756862 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.754659 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.757747 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57cba631-503b-4795-8463-3d1e50957d58-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.760399 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d0112c91-a6fe-4b93-aff9-49f108a64603-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-g56j8\" (UID: \"d0112c91-a6fe-4b93-aff9-49f108a64603\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.761225 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/57cba631-503b-4795-8463-3d1e50957d58-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 
15:42:08.762773 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.763125 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/57cba631-503b-4795-8463-3d1e50957d58-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.780893 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/57cba631-503b-4795-8463-3d1e50957d58-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.785293 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxzl9\" (UniqueName: \"kubernetes.io/projected/57cba631-503b-4795-8463-3d1e50957d58-kube-api-access-zxzl9\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.795568 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm\" (UID: \"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.799579 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.813082 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"57cba631-503b-4795-8463-3d1e50957d58\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.852451 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs4tx\" (UniqueName: \"kubernetes.io/projected/d2333364-70e2-4c7a-933e-142e0ebed301-kube-api-access-hs4tx\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.852498 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/2904ce28-f3f4-41ad-8612-36e924ab3d32-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.852525 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2904ce28-f3f4-41ad-8612-36e924ab3d32-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.852548 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d2333364-70e2-4c7a-933e-142e0ebed301-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.852588 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.852710 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.852712 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/2904ce28-f3f4-41ad-8612-36e924ab3d32-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.852759 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p42s\" (UniqueName: \"kubernetes.io/projected/2904ce28-f3f4-41ad-8612-36e924ab3d32-kube-api-access-9p42s\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.852796 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/d2333364-70e2-4c7a-933e-142e0ebed301-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.852816 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2904ce28-f3f4-41ad-8612-36e924ab3d32-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.853005 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/d2333364-70e2-4c7a-933e-142e0ebed301-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.853128 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/d2333364-70e2-4c7a-933e-142e0ebed301-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.853156 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2333364-70e2-4c7a-933e-142e0ebed301-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: 
\"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.853323 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.853432 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/2904ce28-f3f4-41ad-8612-36e924ab3d32-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.853613 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2333364-70e2-4c7a-933e-142e0ebed301-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.854463 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2333364-70e2-4c7a-933e-142e0ebed301-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.856928 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: 
\"kubernetes.io/secret/d2333364-70e2-4c7a-933e-142e0ebed301-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.858860 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/d2333364-70e2-4c7a-933e-142e0ebed301-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.860359 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/d2333364-70e2-4c7a-933e-142e0ebed301-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.872068 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs4tx\" (UniqueName: \"kubernetes.io/projected/d2333364-70e2-4c7a-933e-142e0ebed301-kube-api-access-hs4tx\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.877270 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d2333364-70e2-4c7a-933e-142e0ebed301\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.892421 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.954903 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/2904ce28-f3f4-41ad-8612-36e924ab3d32-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.954991 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/2904ce28-f3f4-41ad-8612-36e924ab3d32-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.955029 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2904ce28-f3f4-41ad-8612-36e924ab3d32-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.955091 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/2904ce28-f3f4-41ad-8612-36e924ab3d32-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.955152 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p42s\" (UniqueName: 
\"kubernetes.io/projected/2904ce28-f3f4-41ad-8612-36e924ab3d32-kube-api-access-9p42s\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.955187 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2904ce28-f3f4-41ad-8612-36e924ab3d32-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.955326 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.955514 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.956098 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.957208 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2904ce28-f3f4-41ad-8612-36e924ab3d32-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.958109 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2904ce28-f3f4-41ad-8612-36e924ab3d32-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.958417 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/2904ce28-f3f4-41ad-8612-36e924ab3d32-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.960820 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/2904ce28-f3f4-41ad-8612-36e924ab3d32-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.968637 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: 
\"kubernetes.io/secret/2904ce28-f3f4-41ad-8612-36e924ab3d32-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.979725 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p42s\" (UniqueName: \"kubernetes.io/projected/2904ce28-f3f4-41ad-8612-36e924ab3d32-kube-api-access-9p42s\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.982204 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.982310 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:08 crc kubenswrapper[4825]: I0122 15:42:08.983973 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"2904ce28-f3f4-41ad-8612-36e924ab3d32\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:09 crc kubenswrapper[4825]: I0122 15:42:09.223526 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:22 crc kubenswrapper[4825]: E0122 15:42:22.176810 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 22 15:42:22 crc kubenswrapper[4825]: E0122 15:42:22.178711 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zmgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullP
olicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(6b65dd5d-6fe0-4cec-a8d4-d05b099607af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:42:22 crc kubenswrapper[4825]: E0122 15:42:22.180759 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="6b65dd5d-6fe0-4cec-a8d4-d05b099607af" Jan 22 15:42:22 crc kubenswrapper[4825]: E0122 15:42:22.258913 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 22 15:42:22 crc kubenswrapper[4825]: E0122 15:42:22.259377 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 
/var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bps94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevic
e{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(45e6f05d-8a80-49ca-add6-e8c41572b664): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:42:22 crc kubenswrapper[4825]: E0122 15:42:22.263295 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="45e6f05d-8a80-49ca-add6-e8c41572b664" Jan 22 15:42:22 crc kubenswrapper[4825]: I0122 15:42:22.669681 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8"] Jan 22 15:42:22 crc kubenswrapper[4825]: E0122 15:42:22.952311 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="45e6f05d-8a80-49ca-add6-e8c41572b664" Jan 22 15:42:22 crc kubenswrapper[4825]: E0122 15:42:22.952454 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="6b65dd5d-6fe0-4cec-a8d4-d05b099607af" Jan 22 15:42:28 crc kubenswrapper[4825]: E0122 15:42:28.011879 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 22 15:42:28 crc kubenswrapper[4825]: E0122 15:42:28.012890 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:ncfh9dhd7h57bh8fhb7h698h5b5hb9h688h5f5hbfhfdhbfh57dh584h5ch79h674hf7h586h59ch5bh56bh5fbhb7h596h5b5h546hdfh85h5f4q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4f84t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandle
r:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(b85d0578-2876-4355-b5f7-7412f59eb278): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:42:28 crc kubenswrapper[4825]: E0122 15:42:28.014127 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="b85d0578-2876-4355-b5f7-7412f59eb278" Jan 22 15:42:28 crc kubenswrapper[4825]: E0122 15:42:28.045196 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 22 15:42:28 crc kubenswrapper[4825]: E0122 15:42:28.045411 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hpsq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerRes
izePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(c562767d-1bda-4a9f-beec-5629395ca332): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 22 15:42:28 crc kubenswrapper[4825]: E0122 15:42:28.046682 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="c562767d-1bda-4a9f-beec-5629395ca332"
Jan 22 15:42:28 crc kubenswrapper[4825]: W0122 15:42:28.121422 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ded780_a2df_4624_807e_2426859b0a95.slice/crio-2d1df2e2e9e3e711930433ffd6ab282cbed23cb4d54172712ebc2a04645fea65 WatchSource:0}: Error finding container 2d1df2e2e9e3e711930433ffd6ab282cbed23cb4d54172712ebc2a04645fea65: Status 404 returned error can't find the container with id 2d1df2e2e9e3e711930433ffd6ab282cbed23cb4d54172712ebc2a04645fea65
Jan 22 15:42:28 crc kubenswrapper[4825]: I0122 15:42:28.673149 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 22 15:42:29 crc kubenswrapper[4825]: I0122 15:42:29.001229 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" event={"ID":"28ded780-a2df-4624-807e-2426859b0a95","Type":"ContainerStarted","Data":"2d1df2e2e9e3e711930433ffd6ab282cbed23cb4d54172712ebc2a04645fea65"}
Jan 22 15:42:29 crc kubenswrapper[4825]: E0122 15:42:29.003138 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="b85d0578-2876-4355-b5f7-7412f59eb278"
Jan 22 15:42:29 crc kubenswrapper[4825]: E0122 15:42:29.003324 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="c562767d-1bda-4a9f-beec-5629395ca332"
Jan 22 15:42:29 crc kubenswrapper[4825]: E0122 15:42:29.364322 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 22 15:42:29 crc kubenswrapper[4825]: E0122 15:42:29.364732 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvd7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-5p8q6_openstack(c3e73acb-b6ba-4385-aa50-f303b4aab4f2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 22 15:42:29 crc kubenswrapper[4825]: E0122 15:42:29.366138 4825 pod_workers.go:1301] "Error
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" podUID="c3e73acb-b6ba-4385-aa50-f303b4aab4f2"
Jan 22 15:42:29 crc kubenswrapper[4825]: E0122 15:42:29.801885 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 22 15:42:29 crc kubenswrapper[4825]: E0122 15:42:29.802482 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2w4kb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-hwvwq_openstack(e25af55c-8182-4883-b6f3-4b9a937d598c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 22 15:42:29 crc kubenswrapper[4825]: E0122 15:42:29.803816 4825 pod_workers.go:1301] "Error
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-hwvwq" podUID="e25af55c-8182-4883-b6f3-4b9a937d598c"
Jan 22 15:42:29 crc kubenswrapper[4825]: E0122 15:42:29.906183 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 22 15:42:29 crc kubenswrapper[4825]: E0122 15:42:29.906381 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp24m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-dv8gm_openstack(4f5c5372-5119-4cb1-a849-ff822718bd40): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 22 15:42:29 crc kubenswrapper[4825]: E0122 15:42:29.908169 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\""
pod="openstack/dnsmasq-dns-675f4bcbfc-dv8gm" podUID="4f5c5372-5119-4cb1-a849-ff822718bd40"
Jan 22 15:42:29 crc kubenswrapper[4825]: E0122 15:42:29.973830 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 22 15:42:29 crc kubenswrapper[4825]: E0122 15:42:29.974140 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pc5nz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPoli
cy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-dfcq6_openstack(8ed16250-e013-4590-a99d-55576235c7d9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 22 15:42:29 crc kubenswrapper[4825]: E0122 15:42:29.975602 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" podUID="8ed16250-e013-4590-a99d-55576235c7d9"
Jan 22 15:42:30 crc kubenswrapper[4825]: I0122 15:42:30.019073 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96e4c8b8-1127-4f5a-9fe7-51f8b2478388","Type":"ContainerStarted","Data":"6b5a9f77e73d9348fe038ef19f8e7928d862a2b42ed704732abe21dffd8f71f7"}
Jan 22 15:42:30 crc kubenswrapper[4825]: E0122 15:42:30.021606 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" podUID="8ed16250-e013-4590-a99d-55576235c7d9"
Jan 22 15:42:30 crc kubenswrapper[4825]: E0122 15:42:30.021831 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\"
with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" podUID="c3e73acb-b6ba-4385-aa50-f303b4aab4f2"
Jan 22 15:42:30 crc kubenswrapper[4825]: I0122 15:42:30.261628 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-snszk"]
Jan 22 15:42:30 crc kubenswrapper[4825]: I0122 15:42:30.296585 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Jan 22 15:42:30 crc kubenswrapper[4825]: W0122 15:42:30.298079 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f7994d5_5cc8_4830_bcd1_9f63b9109a09.slice/crio-aeb70bd35ec350b4c5144eac8da6f0d627a0ff8090c1b447bb026068f8beeda9 WatchSource:0}: Error finding container aeb70bd35ec350b4c5144eac8da6f0d627a0ff8090c1b447bb026068f8beeda9: Status 404 returned error can't find the container with id aeb70bd35ec350b4c5144eac8da6f0d627a0ff8090c1b447bb026068f8beeda9
Jan 22 15:42:30 crc kubenswrapper[4825]: I0122 15:42:30.581736 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 22 15:42:30 crc kubenswrapper[4825]: I0122 15:42:30.825701 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt"]
Jan 22 15:42:30 crc kubenswrapper[4825]: I0122 15:42:30.983199 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"]
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.004552 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs"]
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.025221 4825 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hwvwq"
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.028296 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm"]
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.055555 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"]
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.059750 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"]
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.062198 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dv8gm"
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.062765 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"215992ea-1abc-44d0-925b-799eb87bcc09","Type":"ContainerStarted","Data":"184e43136592bf3469b06dc128b988a48972055ca89cc79136bb1b491d6c7e34"}
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.071737 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8"]
Jan 22 15:42:31 crc kubenswrapper[4825]: W0122 15:42:31.075272 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0112c91_a6fe_4b93_aff9_49f108a64603.slice/crio-108c289aaf265fc3b81d7f127ec72470838a2008771a56f33711abd98646e203 WatchSource:0}: Error finding container 108c289aaf265fc3b81d7f127ec72470838a2008771a56f33711abd98646e203: Status 404 returned error can't find the container with id 108c289aaf265fc3b81d7f127ec72470838a2008771a56f33711abd98646e203
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.076496 4825 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hwvwq"
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.076498 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-hwvwq" event={"ID":"e25af55c-8182-4883-b6f3-4b9a937d598c","Type":"ContainerDied","Data":"3526a1fff1c43471899ce6e77ad0aabfc21af3a5004cd584a811ccc809ff7710"}
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.085862 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"1f7994d5-5cc8-4830-bcd1-9f63b9109a09","Type":"ContainerStarted","Data":"aeb70bd35ec350b4c5144eac8da6f0d627a0ff8090c1b447bb026068f8beeda9"}
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.090097 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.097630 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.140453 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt" event={"ID":"dba34c46-ef4e-4315-8b1d-1f1946e329a7","Type":"ContainerStarted","Data":"55358b9b29fe9c31307aa052f2243ed88ded76e38d83a21d1bbc26403c91d409"}
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.146564 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-dv8gm" event={"ID":"4f5c5372-5119-4cb1-a849-ff822718bd40","Type":"ContainerDied","Data":"800481b4f83735bb5b0493085e119140ef5fa54a40d0ad0cc5e8dd7cdc872fd1"}
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.146711 4825 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dv8gm"
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.163264 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"d2333364-70e2-4c7a-933e-142e0ebed301","Type":"ContainerStarted","Data":"9acc7eeb280c55ec3c7e79ec4424e252fd772eb34b9af0246752d801bd78204d"}
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.168483 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" event={"ID":"14563680-8847-4136-9955-836dd8331930","Type":"ContainerStarted","Data":"e328a1aac71b4951862c672a50c146cb294b4cd20f8855e57f3cc360f1c1f9af"}
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.177823 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-snszk" event={"ID":"306a03b3-2cdb-494a-ab5b-51d80fe3586c","Type":"ContainerStarted","Data":"f463a259a672f07f33cdc51df8ddde2aafbfc8ce483c65f07c1a4ae56f4565e7"}
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.181315 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"892f29f1-29c4-4f1d-83af-660bf2983766","Type":"ContainerStarted","Data":"84ad2ae0b82fdf32873e10fcf6358e1ca36050a411d75cc8ea7a69ca71118651"}
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.197805 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e25af55c-8182-4883-b6f3-4b9a937d598c-dns-svc\") pod \"e25af55c-8182-4883-b6f3-4b9a937d598c\" (UID: \"e25af55c-8182-4883-b6f3-4b9a937d598c\") "
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.198046 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e25af55c-8182-4883-b6f3-4b9a937d598c-config\") pod \"e25af55c-8182-4883-b6f3-4b9a937d598c\" (UID:
\"e25af55c-8182-4883-b6f3-4b9a937d598c\") "
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.198089 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w4kb\" (UniqueName: \"kubernetes.io/projected/e25af55c-8182-4883-b6f3-4b9a937d598c-kube-api-access-2w4kb\") pod \"e25af55c-8182-4883-b6f3-4b9a937d598c\" (UID: \"e25af55c-8182-4883-b6f3-4b9a937d598c\") "
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.198130 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f5c5372-5119-4cb1-a849-ff822718bd40-config\") pod \"4f5c5372-5119-4cb1-a849-ff822718bd40\" (UID: \"4f5c5372-5119-4cb1-a849-ff822718bd40\") "
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.198168 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp24m\" (UniqueName: \"kubernetes.io/projected/4f5c5372-5119-4cb1-a849-ff822718bd40-kube-api-access-tp24m\") pod \"4f5c5372-5119-4cb1-a849-ff822718bd40\" (UID: \"4f5c5372-5119-4cb1-a849-ff822718bd40\") "
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.199961 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e25af55c-8182-4883-b6f3-4b9a937d598c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e25af55c-8182-4883-b6f3-4b9a937d598c" (UID: "e25af55c-8182-4883-b6f3-4b9a937d598c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.200391 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e25af55c-8182-4883-b6f3-4b9a937d598c-config" (OuterVolumeSpecName: "config") pod "e25af55c-8182-4883-b6f3-4b9a937d598c" (UID: "e25af55c-8182-4883-b6f3-4b9a937d598c"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.202214 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f5c5372-5119-4cb1-a849-ff822718bd40-config" (OuterVolumeSpecName: "config") pod "4f5c5372-5119-4cb1-a849-ff822718bd40" (UID: "4f5c5372-5119-4cb1-a849-ff822718bd40"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.206570 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f5c5372-5119-4cb1-a849-ff822718bd40-kube-api-access-tp24m" (OuterVolumeSpecName: "kube-api-access-tp24m") pod "4f5c5372-5119-4cb1-a849-ff822718bd40" (UID: "4f5c5372-5119-4cb1-a849-ff822718bd40"). InnerVolumeSpecName "kube-api-access-tp24m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.207302 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e25af55c-8182-4883-b6f3-4b9a937d598c-kube-api-access-2w4kb" (OuterVolumeSpecName: "kube-api-access-2w4kb") pod "e25af55c-8182-4883-b6f3-4b9a937d598c" (UID: "e25af55c-8182-4883-b6f3-4b9a937d598c"). InnerVolumeSpecName "kube-api-access-2w4kb".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.229383 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bkgcs"]
Jan 22 15:42:31 crc kubenswrapper[4825]: W0122 15:42:31.247498 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb08ffe2b_2e43_437b_beb1_2eb436baa4ec.slice/crio-440e87839efdfee3563102254e238249afd932655751e67c09d9e292b43e1281 WatchSource:0}: Error finding container 440e87839efdfee3563102254e238249afd932655751e67c09d9e292b43e1281: Status 404 returned error can't find the container with id 440e87839efdfee3563102254e238249afd932655751e67c09d9e292b43e1281
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.310661 4825 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e25af55c-8182-4883-b6f3-4b9a937d598c-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.310752 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e25af55c-8182-4883-b6f3-4b9a937d598c-config\") on node \"crc\" DevicePath \"\""
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.310798 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w4kb\" (UniqueName: \"kubernetes.io/projected/e25af55c-8182-4883-b6f3-4b9a937d598c-kube-api-access-2w4kb\") on node \"crc\" DevicePath \"\""
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.310816 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f5c5372-5119-4cb1-a849-ff822718bd40-config\") on node \"crc\" DevicePath \"\""
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.310827 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp24m\" (UniqueName:
\"kubernetes.io/projected/4f5c5372-5119-4cb1-a849-ff822718bd40-kube-api-access-tp24m\") on node \"crc\" DevicePath \"\""
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.471965 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hwvwq"]
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.485756 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hwvwq"]
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.541249 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e25af55c-8182-4883-b6f3-4b9a937d598c" path="/var/lib/kubelet/pods/e25af55c-8182-4883-b6f3-4b9a937d598c/volumes"
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.541998 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dv8gm"]
Jan 22 15:42:31 crc kubenswrapper[4825]: I0122 15:42:31.542042 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dv8gm"]
Jan 22 15:42:32 crc kubenswrapper[4825]: I0122 15:42:32.197565 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" event={"ID":"d0112c91-a6fe-4b93-aff9-49f108a64603","Type":"ContainerStarted","Data":"108c289aaf265fc3b81d7f127ec72470838a2008771a56f33711abd98646e203"}
Jan 22 15:42:32 crc kubenswrapper[4825]: I0122 15:42:32.199538 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" event={"ID":"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc","Type":"ContainerStarted","Data":"aa0541bc21f0fbeffd08a563c988b0de55eb8cd65593965431e6725ab7e05244"}
Jan 22 15:42:32 crc kubenswrapper[4825]: I0122 15:42:32.201338 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"684234f5-b409-42a4-9494-52a0565b000c","Type":"ContainerStarted","Data":"db9b086a67871c23179e281a76ddb166dfe97552eb519629d032371bae48a844"} Jan
22 15:42:32 crc kubenswrapper[4825]: I0122 15:42:32.202628 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"2904ce28-f3f4-41ad-8612-36e924ab3d32","Type":"ContainerStarted","Data":"9a7415d63009f940d06f07fd9423603248b9215ca71a9f22d894e1fbb1f697ee"}
Jan 22 15:42:32 crc kubenswrapper[4825]: I0122 15:42:32.205682 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"57cba631-503b-4795-8463-3d1e50957d58","Type":"ContainerStarted","Data":"8afe01f7c5787f24ff178b7a62b65c4868c6bd705c2a804ec725e224083b0b89"}
Jan 22 15:42:32 crc kubenswrapper[4825]: I0122 15:42:32.207130 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4d975cad-bc38-442f-acdd-0c8fa4f3b429","Type":"ContainerStarted","Data":"bef7a9c73546f3d03428074b03a746804c8208c50d60569416758ef77b0bd2b1"}
Jan 22 15:42:32 crc kubenswrapper[4825]: I0122 15:42:32.212362 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bkgcs" event={"ID":"b08ffe2b-2e43-437b-beb1-2eb436baa4ec","Type":"ContainerStarted","Data":"440e87839efdfee3563102254e238249afd932655751e67c09d9e292b43e1281"}
Jan 22 15:42:33 crc kubenswrapper[4825]: I0122 15:42:33.529276 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f5c5372-5119-4cb1-a849-ff822718bd40" path="/var/lib/kubelet/pods/4f5c5372-5119-4cb1-a849-ff822718bd40/volumes"
Jan 22 15:42:41 crc kubenswrapper[4825]: I0122 15:42:41.336635 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" event={"ID":"d0112c91-a6fe-4b93-aff9-49f108a64603","Type":"ContainerStarted","Data":"ad5bb57bd253a2db7f6f5cf3e02c9c44b0ddf2c548012a3b7735f85887e75c85"}
Jan 22 15:42:41 crc kubenswrapper[4825]: I0122 15:42:41.372153 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" podStartSLOduration=25.688700229 podStartE2EDuration="34.372131112s" podCreationTimestamp="2026-01-22 15:42:07 +0000 UTC" firstStartedPulling="2026-01-22 15:42:31.093790544 +0000 UTC m=+1097.855317454" lastFinishedPulling="2026-01-22 15:42:39.777221427 +0000 UTC m=+1106.538748337" observedRunningTime="2026-01-22 15:42:41.360299483 +0000 UTC m=+1108.121826403" watchObservedRunningTime="2026-01-22 15:42:41.372131112 +0000 UTC m=+1108.133658022" Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.348436 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-snszk" event={"ID":"306a03b3-2cdb-494a-ab5b-51d80fe3586c","Type":"ContainerStarted","Data":"d5553f201976371e97a86a0aeb0ab1b2aa265fd31d4af17f4b1f1889716fe54b"} Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.348892 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-snszk" Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.350388 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"892f29f1-29c4-4f1d-83af-660bf2983766","Type":"ContainerStarted","Data":"42be2347d216f1f78ad8a9730c4889193a3698343ccebf6607c99f11a1e6d259"} Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.352558 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bkgcs" event={"ID":"b08ffe2b-2e43-437b-beb1-2eb436baa4ec","Type":"ContainerStarted","Data":"d883705bce5abf3e5017ef79857990546208f1625e80d835265b7d92a16e6acf"} Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.354223 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"45e6f05d-8a80-49ca-add6-e8c41572b664","Type":"ContainerStarted","Data":"c47a51e689e8e6934dbe0f9c52428877a4be4d4087bcabd749f2d7315b443e0c"} Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.356363 4825 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"684234f5-b409-42a4-9494-52a0565b000c","Type":"ContainerStarted","Data":"84db6b30d9b984e4a5f1e5a2a34f8c4475bebdaedc73657684547ae402504e6f"} Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.356430 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.358607 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"d2333364-70e2-4c7a-933e-142e0ebed301","Type":"ContainerStarted","Data":"32ad3ee64d9e6d34eaa240c34de7844dda66b6737ed5f896f371cac45bbdbfb5"} Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.358788 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.360832 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" event={"ID":"14563680-8847-4136-9955-836dd8331930","Type":"ContainerStarted","Data":"89b099aabee69cd08f591a75d8554e02dde0cbc3c456aebaf4a9c7ac768faf2c"} Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.360972 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.363201 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" event={"ID":"28ded780-a2df-4624-807e-2426859b0a95","Type":"ContainerStarted","Data":"1d647740a78bab962a57b37aeca6296161657d6e48a9ac97efba860217009041"} Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.363302 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 
15:42:42.369946 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"2904ce28-f3f4-41ad-8612-36e924ab3d32","Type":"ContainerStarted","Data":"35e81329211dfd7ba46b7d1a412c85e1895870998c69e86e9200e4fcc8bcbba4"} Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.370147 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.371373 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-snszk" podStartSLOduration=32.858803752 podStartE2EDuration="42.371352284s" podCreationTimestamp="2026-01-22 15:42:00 +0000 UTC" firstStartedPulling="2026-01-22 15:42:30.265149639 +0000 UTC m=+1097.026676549" lastFinishedPulling="2026-01-22 15:42:39.777698171 +0000 UTC m=+1106.539225081" observedRunningTime="2026-01-22 15:42:42.368302186 +0000 UTC m=+1109.129829096" watchObservedRunningTime="2026-01-22 15:42:42.371352284 +0000 UTC m=+1109.132879214" Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.376745 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"57cba631-503b-4795-8463-3d1e50957d58","Type":"ContainerStarted","Data":"e50b7b4af29677765f0d53095b1991f88f2a42f9465ea121c0ae81c5ba542301"} Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.377085 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.384533 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4d975cad-bc38-442f-acdd-0c8fa4f3b429","Type":"ContainerStarted","Data":"a6583622f4fbe8ee17d49e2b7993247f9ae3531f85d701196096761fc7648e49"} Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.395333 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-galera-0" event={"ID":"6b65dd5d-6fe0-4cec-a8d4-d05b099607af","Type":"ContainerStarted","Data":"edb56192ba99b8b7f3bb222de91c9cb6268f93ecda4008ca6bc092d6f063fa5d"} Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.405592 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt" event={"ID":"dba34c46-ef4e-4315-8b1d-1f1946e329a7","Type":"ContainerStarted","Data":"e3772cb4fe6217f533105b68b76a452e91c39d244f64fae8d7d1392b4fcc4c4a"} Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.405637 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt" Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.405893 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.406944 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" podStartSLOduration=26.48754332 podStartE2EDuration="35.406922041s" podCreationTimestamp="2026-01-22 15:42:07 +0000 UTC" firstStartedPulling="2026-01-22 15:42:30.97162466 +0000 UTC m=+1097.733151570" lastFinishedPulling="2026-01-22 15:42:39.891003381 +0000 UTC m=+1106.652530291" observedRunningTime="2026-01-22 15:42:42.396497093 +0000 UTC m=+1109.158024013" watchObservedRunningTime="2026-01-22 15:42:42.406922041 +0000 UTC m=+1109.168448951" Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.424084 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-g56j8" Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.432764 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=26.545159438 
podStartE2EDuration="35.432743349s" podCreationTimestamp="2026-01-22 15:42:07 +0000 UTC" firstStartedPulling="2026-01-22 15:42:31.004383007 +0000 UTC m=+1097.765909927" lastFinishedPulling="2026-01-22 15:42:39.891966928 +0000 UTC m=+1106.653493838" observedRunningTime="2026-01-22 15:42:42.422921578 +0000 UTC m=+1109.184448498" watchObservedRunningTime="2026-01-22 15:42:42.432743349 +0000 UTC m=+1109.194270259" Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.501084 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=38.359419314 podStartE2EDuration="48.501066833s" podCreationTimestamp="2026-01-22 15:41:54 +0000 UTC" firstStartedPulling="2026-01-22 15:42:31.142373113 +0000 UTC m=+1097.903900023" lastFinishedPulling="2026-01-22 15:42:41.284020612 +0000 UTC m=+1108.045547542" observedRunningTime="2026-01-22 15:42:42.495705519 +0000 UTC m=+1109.257232429" watchObservedRunningTime="2026-01-22 15:42:42.501066833 +0000 UTC m=+1109.262593743" Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.552152 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" podStartSLOduration=23.903298107 podStartE2EDuration="35.552130763s" podCreationTimestamp="2026-01-22 15:42:07 +0000 UTC" firstStartedPulling="2026-01-22 15:42:28.127800144 +0000 UTC m=+1094.889327054" lastFinishedPulling="2026-01-22 15:42:39.7766328 +0000 UTC m=+1106.538159710" observedRunningTime="2026-01-22 15:42:42.523063642 +0000 UTC m=+1109.284590552" watchObservedRunningTime="2026-01-22 15:42:42.552130763 +0000 UTC m=+1109.313657673" Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.553863 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=26.465128219 podStartE2EDuration="35.553849312s" podCreationTimestamp="2026-01-22 15:42:07 +0000 UTC" 
firstStartedPulling="2026-01-22 15:42:31.142587969 +0000 UTC m=+1097.904114879" lastFinishedPulling="2026-01-22 15:42:40.231309062 +0000 UTC m=+1106.992835972" observedRunningTime="2026-01-22 15:42:42.545330818 +0000 UTC m=+1109.306857738" watchObservedRunningTime="2026-01-22 15:42:42.553849312 +0000 UTC m=+1109.315376222" Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.604967 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=26.521864721 podStartE2EDuration="35.604943983s" podCreationTimestamp="2026-01-22 15:42:07 +0000 UTC" firstStartedPulling="2026-01-22 15:42:31.147109108 +0000 UTC m=+1097.908636018" lastFinishedPulling="2026-01-22 15:42:40.23018837 +0000 UTC m=+1106.991715280" observedRunningTime="2026-01-22 15:42:42.596466561 +0000 UTC m=+1109.357993461" watchObservedRunningTime="2026-01-22 15:42:42.604943983 +0000 UTC m=+1109.366470883" Jan 22 15:42:42 crc kubenswrapper[4825]: I0122 15:42:42.625006 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt" podStartSLOduration=26.381495028 podStartE2EDuration="35.624969366s" podCreationTimestamp="2026-01-22 15:42:07 +0000 UTC" firstStartedPulling="2026-01-22 15:42:30.972205567 +0000 UTC m=+1097.733732477" lastFinishedPulling="2026-01-22 15:42:40.215679905 +0000 UTC m=+1106.977206815" observedRunningTime="2026-01-22 15:42:42.620788506 +0000 UTC m=+1109.382315406" watchObservedRunningTime="2026-01-22 15:42:42.624969366 +0000 UTC m=+1109.386496276" Jan 22 15:42:43 crc kubenswrapper[4825]: I0122 15:42:43.414546 4825 generic.go:334] "Generic (PLEG): container finished" podID="c3e73acb-b6ba-4385-aa50-f303b4aab4f2" containerID="227a98847141f500e9dbffc940850dcbe53cfc134afb2a46bd5507fe627a1f17" exitCode=0 Jan 22 15:42:43 crc kubenswrapper[4825]: I0122 15:42:43.414787 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" event={"ID":"c3e73acb-b6ba-4385-aa50-f303b4aab4f2","Type":"ContainerDied","Data":"227a98847141f500e9dbffc940850dcbe53cfc134afb2a46bd5507fe627a1f17"} Jan 22 15:42:43 crc kubenswrapper[4825]: I0122 15:42:43.418094 4825 generic.go:334] "Generic (PLEG): container finished" podID="b08ffe2b-2e43-437b-beb1-2eb436baa4ec" containerID="d883705bce5abf3e5017ef79857990546208f1625e80d835265b7d92a16e6acf" exitCode=0 Jan 22 15:42:43 crc kubenswrapper[4825]: I0122 15:42:43.418220 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bkgcs" event={"ID":"b08ffe2b-2e43-437b-beb1-2eb436baa4ec","Type":"ContainerDied","Data":"d883705bce5abf3e5017ef79857990546208f1625e80d835265b7d92a16e6acf"} Jan 22 15:42:44 crc kubenswrapper[4825]: I0122 15:42:44.435316 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bkgcs" event={"ID":"b08ffe2b-2e43-437b-beb1-2eb436baa4ec","Type":"ContainerStarted","Data":"af8b22f401660d14875a54f35bedd829ef1ebc558e1f3cad1f66d1915a6732ab"} Jan 22 15:42:44 crc kubenswrapper[4825]: I0122 15:42:44.437187 4825 generic.go:334] "Generic (PLEG): container finished" podID="8ed16250-e013-4590-a99d-55576235c7d9" containerID="b7e8ea50d17dc63739334d1f801e466a472ce0d595f3d26cdceb47621861f86e" exitCode=0 Jan 22 15:42:44 crc kubenswrapper[4825]: I0122 15:42:44.437246 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" event={"ID":"8ed16250-e013-4590-a99d-55576235c7d9","Type":"ContainerDied","Data":"b7e8ea50d17dc63739334d1f801e466a472ce0d595f3d26cdceb47621861f86e"} Jan 22 15:42:44 crc kubenswrapper[4825]: I0122 15:42:44.440084 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" event={"ID":"c3e73acb-b6ba-4385-aa50-f303b4aab4f2","Type":"ContainerStarted","Data":"f409b76be5a408007250ff2f1f056770aab290adfa57cdd724627823f8a29151"} Jan 22 15:42:44 crc kubenswrapper[4825]: I0122 
15:42:44.440277 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" Jan 22 15:42:44 crc kubenswrapper[4825]: I0122 15:42:44.442284 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c562767d-1bda-4a9f-beec-5629395ca332","Type":"ContainerStarted","Data":"00290a6649824fbeeb226c5eccd66675c5f5ae8801084127c3bff60b1a439aa3"} Jan 22 15:42:44 crc kubenswrapper[4825]: I0122 15:42:44.555441 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" podStartSLOduration=3.876484814 podStartE2EDuration="56.555418424s" podCreationTimestamp="2026-01-22 15:41:48 +0000 UTC" firstStartedPulling="2026-01-22 15:41:49.412106718 +0000 UTC m=+1056.173633628" lastFinishedPulling="2026-01-22 15:42:42.091040328 +0000 UTC m=+1108.852567238" observedRunningTime="2026-01-22 15:42:44.519646061 +0000 UTC m=+1111.281172971" watchObservedRunningTime="2026-01-22 15:42:44.555418424 +0000 UTC m=+1111.316945334" Jan 22 15:42:45 crc kubenswrapper[4825]: I0122 15:42:45.460890 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bkgcs" event={"ID":"b08ffe2b-2e43-437b-beb1-2eb436baa4ec","Type":"ContainerStarted","Data":"542a955c717dc235e2aa651b5f214c616983a14fca9bd61ca0da070c4e285541"} Jan 22 15:42:45 crc kubenswrapper[4825]: I0122 15:42:45.463863 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" event={"ID":"8ed16250-e013-4590-a99d-55576235c7d9","Type":"ContainerStarted","Data":"bbacf0dcb7628cfd229bb4a5d96334ecd2b31115d165a1936ee4d7f8d6aba649"} Jan 22 15:42:46 crc kubenswrapper[4825]: I0122 15:42:46.490156 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" 
event={"ID":"1f7994d5-5cc8-4830-bcd1-9f63b9109a09","Type":"ContainerStarted","Data":"6074c7ee55db5f0fce00d01794ed0287ac19a31909e574842c4206b9457fc625"} Jan 22 15:42:46 crc kubenswrapper[4825]: I0122 15:42:46.490443 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" Jan 22 15:42:46 crc kubenswrapper[4825]: I0122 15:42:46.490501 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bkgcs" Jan 22 15:42:46 crc kubenswrapper[4825]: I0122 15:42:46.490862 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bkgcs" Jan 22 15:42:46 crc kubenswrapper[4825]: I0122 15:42:46.595740 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" podStartSLOduration=-9223371978.259052 podStartE2EDuration="58.595723165s" podCreationTimestamp="2026-01-22 15:41:48 +0000 UTC" firstStartedPulling="2026-01-22 15:41:49.844517552 +0000 UTC m=+1056.606044462" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:42:46.588795477 +0000 UTC m=+1113.350322397" watchObservedRunningTime="2026-01-22 15:42:46.595723165 +0000 UTC m=+1113.357250075" Jan 22 15:42:46 crc kubenswrapper[4825]: I0122 15:42:46.620175 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-bkgcs" podStartSLOduration=38.093804431 podStartE2EDuration="46.620151363s" podCreationTimestamp="2026-01-22 15:42:00 +0000 UTC" firstStartedPulling="2026-01-22 15:42:31.250605078 +0000 UTC m=+1098.012131988" lastFinishedPulling="2026-01-22 15:42:39.77695201 +0000 UTC m=+1106.538478920" observedRunningTime="2026-01-22 15:42:46.614666797 +0000 UTC m=+1113.376193727" watchObservedRunningTime="2026-01-22 15:42:46.620151363 +0000 UTC m=+1113.381678273" Jan 22 15:42:49 crc kubenswrapper[4825]: I0122 15:42:49.531713 4825 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96e4c8b8-1127-4f5a-9fe7-51f8b2478388","Type":"ContainerStarted","Data":"cdef4fcb8164826595dbc00ba1067188d2efc3d80919f2af6b91f912c22490c1"} Jan 22 15:42:50 crc kubenswrapper[4825]: I0122 15:42:50.539175 4825 generic.go:334] "Generic (PLEG): container finished" podID="6b65dd5d-6fe0-4cec-a8d4-d05b099607af" containerID="edb56192ba99b8b7f3bb222de91c9cb6268f93ecda4008ca6bc092d6f063fa5d" exitCode=0 Jan 22 15:42:50 crc kubenswrapper[4825]: I0122 15:42:50.539290 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6b65dd5d-6fe0-4cec-a8d4-d05b099607af","Type":"ContainerDied","Data":"edb56192ba99b8b7f3bb222de91c9cb6268f93ecda4008ca6bc092d6f063fa5d"} Jan 22 15:42:52 crc kubenswrapper[4825]: I0122 15:42:52.562860 4825 generic.go:334] "Generic (PLEG): container finished" podID="1f7994d5-5cc8-4830-bcd1-9f63b9109a09" containerID="6074c7ee55db5f0fce00d01794ed0287ac19a31909e574842c4206b9457fc625" exitCode=0 Jan 22 15:42:52 crc kubenswrapper[4825]: I0122 15:42:52.563031 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"1f7994d5-5cc8-4830-bcd1-9f63b9109a09","Type":"ContainerDied","Data":"6074c7ee55db5f0fce00d01794ed0287ac19a31909e574842c4206b9457fc625"} Jan 22 15:42:53 crc kubenswrapper[4825]: I0122 15:42:53.545225 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" Jan 22 15:42:53 crc kubenswrapper[4825]: I0122 15:42:53.575380 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6b65dd5d-6fe0-4cec-a8d4-d05b099607af","Type":"ContainerStarted","Data":"41360ef902519b03cacedb91f083807e48a02fb769258c2ac77d585c89dc2aa2"} Jan 22 15:42:53 crc kubenswrapper[4825]: I0122 15:42:53.578544 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"b85d0578-2876-4355-b5f7-7412f59eb278","Type":"ContainerStarted","Data":"f46c1b33bc7e9eff6bbf1f8eed8dc7f8736dbbf5e73e80303e24737d3f84042e"} Jan 22 15:42:53 crc kubenswrapper[4825]: I0122 15:42:53.579571 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 22 15:42:53 crc kubenswrapper[4825]: I0122 15:42:53.581843 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"892f29f1-29c4-4f1d-83af-660bf2983766","Type":"ContainerStarted","Data":"eaa6d2c94cdfcaacdf90f78920a452ae5792612c9598e67281362a5c23e1c60e"} Jan 22 15:42:53 crc kubenswrapper[4825]: I0122 15:42:53.583836 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4d975cad-bc38-442f-acdd-0c8fa4f3b429","Type":"ContainerStarted","Data":"8bf7b4d4dd0418d159a331ccded32448ad41a3a3e24badddc080cc5ffeea6522"} Jan 22 15:42:53 crc kubenswrapper[4825]: I0122 15:42:53.586773 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" event={"ID":"c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc","Type":"ContainerStarted","Data":"19c83893a8a8d3cfe78ba5715e46d4dc5a62069a78c6ac2ff2f4effe32410505"} Jan 22 15:42:53 crc kubenswrapper[4825]: I0122 15:42:53.586969 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:53 crc kubenswrapper[4825]: I0122 15:42:53.621370 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" Jan 22 15:42:53 crc kubenswrapper[4825]: I0122 15:42:53.623056 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=16.850182248 podStartE2EDuration="1m4.623036084s" podCreationTimestamp="2026-01-22 15:41:49 +0000 UTC" firstStartedPulling="2026-01-22 15:41:52.119673628 +0000 UTC 
m=+1058.881200538" lastFinishedPulling="2026-01-22 15:42:39.892527474 +0000 UTC m=+1106.654054374" observedRunningTime="2026-01-22 15:42:53.620213683 +0000 UTC m=+1120.381740603" watchObservedRunningTime="2026-01-22 15:42:53.623036084 +0000 UTC m=+1120.384562994" Jan 22 15:42:53 crc kubenswrapper[4825]: I0122 15:42:53.657589 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm" podStartSLOduration=25.386887108 podStartE2EDuration="46.657564711s" podCreationTimestamp="2026-01-22 15:42:07 +0000 UTC" firstStartedPulling="2026-01-22 15:42:31.142524827 +0000 UTC m=+1097.904051737" lastFinishedPulling="2026-01-22 15:42:52.41320243 +0000 UTC m=+1119.174729340" observedRunningTime="2026-01-22 15:42:53.652193988 +0000 UTC m=+1120.413720898" watchObservedRunningTime="2026-01-22 15:42:53.657564711 +0000 UTC m=+1120.419091631" Jan 22 15:42:53 crc kubenswrapper[4825]: I0122 15:42:53.688334 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.296555349 podStartE2EDuration="1m1.6882898s" podCreationTimestamp="2026-01-22 15:41:52 +0000 UTC" firstStartedPulling="2026-01-22 15:41:54.021894281 +0000 UTC m=+1060.783421191" lastFinishedPulling="2026-01-22 15:42:52.413628722 +0000 UTC m=+1119.175155642" observedRunningTime="2026-01-22 15:42:53.67781568 +0000 UTC m=+1120.439342590" watchObservedRunningTime="2026-01-22 15:42:53.6882898 +0000 UTC m=+1120.449816710" Jan 22 15:42:53 crc kubenswrapper[4825]: I0122 15:42:53.693466 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 22 15:42:53 crc kubenswrapper[4825]: I0122 15:42:53.704198 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=35.406652454 podStartE2EDuration="56.704177164s" podCreationTimestamp="2026-01-22 15:41:57 +0000 UTC" 
firstStartedPulling="2026-01-22 15:42:31.142222399 +0000 UTC m=+1097.903749299" lastFinishedPulling="2026-01-22 15:42:52.439747099 +0000 UTC m=+1119.201274009" observedRunningTime="2026-01-22 15:42:53.702799895 +0000 UTC m=+1120.464326815" watchObservedRunningTime="2026-01-22 15:42:53.704177164 +0000 UTC m=+1120.465704074" Jan 22 15:42:53 crc kubenswrapper[4825]: I0122 15:42:53.737737 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=30.958636914 podStartE2EDuration="52.737707543s" podCreationTimestamp="2026-01-22 15:42:01 +0000 UTC" firstStartedPulling="2026-01-22 15:42:30.638177526 +0000 UTC m=+1097.399704436" lastFinishedPulling="2026-01-22 15:42:52.417248155 +0000 UTC m=+1119.178775065" observedRunningTime="2026-01-22 15:42:53.728774958 +0000 UTC m=+1120.490301868" watchObservedRunningTime="2026-01-22 15:42:53.737707543 +0000 UTC m=+1120.499234453" Jan 22 15:42:53 crc kubenswrapper[4825]: I0122 15:42:53.884106 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 22 15:42:53 crc kubenswrapper[4825]: I0122 15:42:53.927776 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" Jan 22 15:42:53 crc kubenswrapper[4825]: I0122 15:42:53.933659 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 22 15:42:54 crc kubenswrapper[4825]: I0122 15:42:54.026672 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5p8q6"] Jan 22 15:42:54 crc kubenswrapper[4825]: I0122 15:42:54.032509 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" podUID="c3e73acb-b6ba-4385-aa50-f303b4aab4f2" containerName="dnsmasq-dns" containerID="cri-o://f409b76be5a408007250ff2f1f056770aab290adfa57cdd724627823f8a29151" gracePeriod=10 Jan 22 
15:42:54 crc kubenswrapper[4825]: I0122 15:42:54.600456 4825 generic.go:334] "Generic (PLEG): container finished" podID="c3e73acb-b6ba-4385-aa50-f303b4aab4f2" containerID="f409b76be5a408007250ff2f1f056770aab290adfa57cdd724627823f8a29151" exitCode=0 Jan 22 15:42:54 crc kubenswrapper[4825]: I0122 15:42:54.600606 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" event={"ID":"c3e73acb-b6ba-4385-aa50-f303b4aab4f2","Type":"ContainerDied","Data":"f409b76be5a408007250ff2f1f056770aab290adfa57cdd724627823f8a29151"} Jan 22 15:42:54 crc kubenswrapper[4825]: I0122 15:42:54.602597 4825 generic.go:334] "Generic (PLEG): container finished" podID="c562767d-1bda-4a9f-beec-5629395ca332" containerID="00290a6649824fbeeb226c5eccd66675c5f5ae8801084127c3bff60b1a439aa3" exitCode=0 Jan 22 15:42:54 crc kubenswrapper[4825]: I0122 15:42:54.602757 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c562767d-1bda-4a9f-beec-5629395ca332","Type":"ContainerDied","Data":"00290a6649824fbeeb226c5eccd66675c5f5ae8801084127c3bff60b1a439aa3"} Jan 22 15:42:54 crc kubenswrapper[4825]: I0122 15:42:54.603285 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 22 15:42:54 crc kubenswrapper[4825]: I0122 15:42:54.675481 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 22 15:42:54 crc kubenswrapper[4825]: I0122 15:42:54.693317 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 22 15:42:54 crc kubenswrapper[4825]: I0122 15:42:54.808604 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.147400 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-hs7xf"] Jan 22 15:42:55 crc 
kubenswrapper[4825]: I0122 15:42:55.152155 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.158268 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.191895 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-hs7xf"] Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.251752 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.271656 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tktr9\" (UniqueName: \"kubernetes.io/projected/3c743092-fab2-48fa-8ff8-222d8443bd21-kube-api-access-tktr9\") pod \"dnsmasq-dns-5bf47b49b7-hs7xf\" (UID: \"3c743092-fab2-48fa-8ff8-222d8443bd21\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.271720 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c743092-fab2-48fa-8ff8-222d8443bd21-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-hs7xf\" (UID: \"3c743092-fab2-48fa-8ff8-222d8443bd21\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.271790 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c743092-fab2-48fa-8ff8-222d8443bd21-config\") pod \"dnsmasq-dns-5bf47b49b7-hs7xf\" (UID: \"3c743092-fab2-48fa-8ff8-222d8443bd21\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.271816 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c743092-fab2-48fa-8ff8-222d8443bd21-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-hs7xf\" (UID: \"3c743092-fab2-48fa-8ff8-222d8443bd21\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.292453 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.372816 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3e73acb-b6ba-4385-aa50-f303b4aab4f2-config\") pod \"c3e73acb-b6ba-4385-aa50-f303b4aab4f2\" (UID: \"c3e73acb-b6ba-4385-aa50-f303b4aab4f2\") " Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.378174 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvd7p\" (UniqueName: \"kubernetes.io/projected/c3e73acb-b6ba-4385-aa50-f303b4aab4f2-kube-api-access-nvd7p\") pod \"c3e73acb-b6ba-4385-aa50-f303b4aab4f2\" (UID: \"c3e73acb-b6ba-4385-aa50-f303b4aab4f2\") " Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.378235 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3e73acb-b6ba-4385-aa50-f303b4aab4f2-dns-svc\") pod \"c3e73acb-b6ba-4385-aa50-f303b4aab4f2\" (UID: \"c3e73acb-b6ba-4385-aa50-f303b4aab4f2\") " Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.378537 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tktr9\" (UniqueName: \"kubernetes.io/projected/3c743092-fab2-48fa-8ff8-222d8443bd21-kube-api-access-tktr9\") pod \"dnsmasq-dns-5bf47b49b7-hs7xf\" (UID: \"3c743092-fab2-48fa-8ff8-222d8443bd21\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 
15:42:55.378625 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c743092-fab2-48fa-8ff8-222d8443bd21-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-hs7xf\" (UID: \"3c743092-fab2-48fa-8ff8-222d8443bd21\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.378772 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c743092-fab2-48fa-8ff8-222d8443bd21-config\") pod \"dnsmasq-dns-5bf47b49b7-hs7xf\" (UID: \"3c743092-fab2-48fa-8ff8-222d8443bd21\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.378804 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c743092-fab2-48fa-8ff8-222d8443bd21-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-hs7xf\" (UID: \"3c743092-fab2-48fa-8ff8-222d8443bd21\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.386131 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c743092-fab2-48fa-8ff8-222d8443bd21-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-hs7xf\" (UID: \"3c743092-fab2-48fa-8ff8-222d8443bd21\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.387407 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c743092-fab2-48fa-8ff8-222d8443bd21-config\") pod \"dnsmasq-dns-5bf47b49b7-hs7xf\" (UID: \"3c743092-fab2-48fa-8ff8-222d8443bd21\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.388073 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/3c743092-fab2-48fa-8ff8-222d8443bd21-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-hs7xf\" (UID: \"3c743092-fab2-48fa-8ff8-222d8443bd21\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.393330 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3e73acb-b6ba-4385-aa50-f303b4aab4f2-kube-api-access-nvd7p" (OuterVolumeSpecName: "kube-api-access-nvd7p") pod "c3e73acb-b6ba-4385-aa50-f303b4aab4f2" (UID: "c3e73acb-b6ba-4385-aa50-f303b4aab4f2"). InnerVolumeSpecName "kube-api-access-nvd7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.442567 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tktr9\" (UniqueName: \"kubernetes.io/projected/3c743092-fab2-48fa-8ff8-222d8443bd21-kube-api-access-tktr9\") pod \"dnsmasq-dns-5bf47b49b7-hs7xf\" (UID: \"3c743092-fab2-48fa-8ff8-222d8443bd21\") " pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.480319 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvd7p\" (UniqueName: \"kubernetes.io/projected/c3e73acb-b6ba-4385-aa50-f303b4aab4f2-kube-api-access-nvd7p\") on node \"crc\" DevicePath \"\"" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.486741 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.503108 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-mcnvt"] Jan 22 15:42:55 crc kubenswrapper[4825]: E0122 15:42:55.503520 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3e73acb-b6ba-4385-aa50-f303b4aab4f2" containerName="init" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.503532 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3e73acb-b6ba-4385-aa50-f303b4aab4f2" containerName="init" Jan 22 15:42:55 crc kubenswrapper[4825]: E0122 15:42:55.503546 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3e73acb-b6ba-4385-aa50-f303b4aab4f2" containerName="dnsmasq-dns" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.503552 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3e73acb-b6ba-4385-aa50-f303b4aab4f2" containerName="dnsmasq-dns" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.503740 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3e73acb-b6ba-4385-aa50-f303b4aab4f2" containerName="dnsmasq-dns" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.504453 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.512521 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.586785 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7-combined-ca-bundle\") pod \"ovn-controller-metrics-mcnvt\" (UID: \"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7\") " pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.586836 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz5vp\" (UniqueName: \"kubernetes.io/projected/b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7-kube-api-access-kz5vp\") pod \"ovn-controller-metrics-mcnvt\" (UID: \"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7\") " pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.586901 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7-config\") pod \"ovn-controller-metrics-mcnvt\" (UID: \"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7\") " pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.586927 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7-ovs-rundir\") pod \"ovn-controller-metrics-mcnvt\" (UID: \"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7\") " pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.586970 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7-ovn-rundir\") pod \"ovn-controller-metrics-mcnvt\" (UID: \"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7\") " pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.592340 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3e73acb-b6ba-4385-aa50-f303b4aab4f2-config" (OuterVolumeSpecName: "config") pod "c3e73acb-b6ba-4385-aa50-f303b4aab4f2" (UID: "c3e73acb-b6ba-4385-aa50-f303b4aab4f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.592361 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mcnvt\" (UID: \"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7\") " pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.592632 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3e73acb-b6ba-4385-aa50-f303b4aab4f2-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.646925 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3e73acb-b6ba-4385-aa50-f303b4aab4f2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3e73acb-b6ba-4385-aa50-f303b4aab4f2" (UID: "c3e73acb-b6ba-4385-aa50-f303b4aab4f2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.737320 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mcnvt\" (UID: \"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7\") " pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.737398 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7-combined-ca-bundle\") pod \"ovn-controller-metrics-mcnvt\" (UID: \"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7\") " pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.737421 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz5vp\" (UniqueName: \"kubernetes.io/projected/b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7-kube-api-access-kz5vp\") pod \"ovn-controller-metrics-mcnvt\" (UID: \"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7\") " pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.737475 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7-config\") pod \"ovn-controller-metrics-mcnvt\" (UID: \"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7\") " pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.737497 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7-ovs-rundir\") pod \"ovn-controller-metrics-mcnvt\" (UID: \"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7\") " 
pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.737529 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7-ovn-rundir\") pod \"ovn-controller-metrics-mcnvt\" (UID: \"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7\") " pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.737617 4825 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3e73acb-b6ba-4385-aa50-f303b4aab4f2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.737844 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7-ovn-rundir\") pod \"ovn-controller-metrics-mcnvt\" (UID: \"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7\") " pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.742600 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mcnvt"] Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.744197 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7-config\") pod \"ovn-controller-metrics-mcnvt\" (UID: \"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7\") " pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.744273 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7-ovs-rundir\") pod \"ovn-controller-metrics-mcnvt\" (UID: \"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7\") " pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:55 crc kubenswrapper[4825]: 
I0122 15:42:55.767710 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7-combined-ca-bundle\") pod \"ovn-controller-metrics-mcnvt\" (UID: \"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7\") " pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.769700 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mcnvt\" (UID: \"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7\") " pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.780334 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" event={"ID":"c3e73acb-b6ba-4385-aa50-f303b4aab4f2","Type":"ContainerDied","Data":"0bf0bcbc47ac12c56e3919a35214922341e96e8c57db6f110ad1b4d3ff66a135"} Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.780399 4825 scope.go:117] "RemoveContainer" containerID="f409b76be5a408007250ff2f1f056770aab290adfa57cdd724627823f8a29151" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.780555 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5p8q6" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.796939 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz5vp\" (UniqueName: \"kubernetes.io/projected/b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7-kube-api-access-kz5vp\") pod \"ovn-controller-metrics-mcnvt\" (UID: \"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7\") " pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.836390 4825 generic.go:334] "Generic (PLEG): container finished" podID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" containerID="cdef4fcb8164826595dbc00ba1067188d2efc3d80919f2af6b91f912c22490c1" exitCode=0 Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.836514 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96e4c8b8-1127-4f5a-9fe7-51f8b2478388","Type":"ContainerDied","Data":"cdef4fcb8164826595dbc00ba1067188d2efc3d80919f2af6b91f912c22490c1"} Jan 22 15:42:55 crc kubenswrapper[4825]: I0122 15:42:55.979455 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-mcnvt" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.014325 4825 scope.go:117] "RemoveContainer" containerID="227a98847141f500e9dbffc940850dcbe53cfc134afb2a46bd5507fe627a1f17" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.027413 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c562767d-1bda-4a9f-beec-5629395ca332","Type":"ContainerStarted","Data":"1d0d35f9cf5ac7c19ca1b99644021546b455c1af26139f1305112787616b0f27"} Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.118375 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-hs7xf"] Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.142599 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.145381 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5p8q6"] Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.178166 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5p8q6"] Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.232123 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371970.622671 podStartE2EDuration="1m6.232104717s" podCreationTimestamp="2026-01-22 15:41:50 +0000 UTC" firstStartedPulling="2026-01-22 15:41:54.229261019 +0000 UTC m=+1060.990787939" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:42:56.107659039 +0000 UTC m=+1122.869185949" watchObservedRunningTime="2026-01-22 15:42:56.232104717 +0000 UTC m=+1122.993631627" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.353218 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-sz7gc"] Jan 22 
15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.354700 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.362314 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.369801 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-sz7gc"] Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.457103 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-dns-svc\") pod \"dnsmasq-dns-8554648995-sz7gc\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.457181 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-sz7gc\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.457205 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls5cv\" (UniqueName: \"kubernetes.io/projected/63bacc05-479b-49fb-bc82-7ca655524842-kube-api-access-ls5cv\") pod \"dnsmasq-dns-8554648995-sz7gc\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.457263 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-config\") pod 
\"dnsmasq-dns-8554648995-sz7gc\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.457510 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-sz7gc\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.472074 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-hs7xf"] Jan 22 15:42:56 crc kubenswrapper[4825]: W0122 15:42:56.502834 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c743092_fab2_48fa_8ff8_222d8443bd21.slice/crio-7ac3fa02ab93a50a30b58e869891277072c6666dd32744d4d78a5da6ab544a08 WatchSource:0}: Error finding container 7ac3fa02ab93a50a30b58e869891277072c6666dd32744d4d78a5da6ab544a08: Status 404 returned error can't find the container with id 7ac3fa02ab93a50a30b58e869891277072c6666dd32744d4d78a5da6ab544a08 Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.574244 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-config\") pod \"dnsmasq-dns-8554648995-sz7gc\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.574405 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-sz7gc\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " 
pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.574479 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-dns-svc\") pod \"dnsmasq-dns-8554648995-sz7gc\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.574558 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls5cv\" (UniqueName: \"kubernetes.io/projected/63bacc05-479b-49fb-bc82-7ca655524842-kube-api-access-ls5cv\") pod \"dnsmasq-dns-8554648995-sz7gc\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.574586 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-sz7gc\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.575765 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-sz7gc\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.576662 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-config\") pod \"dnsmasq-dns-8554648995-sz7gc\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:42:56 crc kubenswrapper[4825]: 
I0122 15:42:56.577513 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-sz7gc\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.597348 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-dns-svc\") pod \"dnsmasq-dns-8554648995-sz7gc\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.649734 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls5cv\" (UniqueName: \"kubernetes.io/projected/63bacc05-479b-49fb-bc82-7ca655524842-kube-api-access-ls5cv\") pod \"dnsmasq-dns-8554648995-sz7gc\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.708195 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.719049 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.726001 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.737629 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.738103 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.738405 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-5h2mj" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.738596 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.738721 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.781123 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.781212 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.781613 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " 
pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.781654 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-scripts\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.781688 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpkwf\" (UniqueName: \"kubernetes.io/projected/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-kube-api-access-lpkwf\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.781760 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-config\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.781778 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.883271 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.883669 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.883720 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.883799 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-scripts\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.883845 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpkwf\" (UniqueName: \"kubernetes.io/projected/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-kube-api-access-lpkwf\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.883947 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-config\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.883972 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.889391 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.889686 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.891246 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.893598 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-scripts\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.893667 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-config\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.895521 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:56 crc kubenswrapper[4825]: I0122 15:42:56.914101 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpkwf\" (UniqueName: \"kubernetes.io/projected/6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9-kube-api-access-lpkwf\") pod \"ovn-northd-0\" (UID: \"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9\") " pod="openstack/ovn-northd-0" Jan 22 15:42:57 crc kubenswrapper[4825]: I0122 15:42:57.051289 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" event={"ID":"3c743092-fab2-48fa-8ff8-222d8443bd21","Type":"ContainerStarted","Data":"7ac3fa02ab93a50a30b58e869891277072c6666dd32744d4d78a5da6ab544a08"} Jan 22 15:42:57 crc kubenswrapper[4825]: I0122 15:42:57.095113 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 22 15:42:57 crc kubenswrapper[4825]: I0122 15:42:57.315140 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mcnvt"] Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:57.616954 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3e73acb-b6ba-4385-aa50-f303b4aab4f2" path="/var/lib/kubelet/pods/c3e73acb-b6ba-4385-aa50-f303b4aab4f2/volumes" Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:57.617527 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-sz7gc"] Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:57.683152 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-j9kkt" Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:57.706794 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 22 15:42:59 crc 
kubenswrapper[4825]: I0122 15:42:57.926445 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-8vxr8" Jan 22 15:42:59 crc kubenswrapper[4825]: W0122 15:42:57.987415 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63bacc05_479b_49fb_bc82_7ca655524842.slice/crio-9e264d2e6ceaf600924463735a37747f55ca4fc90846fa9affcc0b08b801d9ee WatchSource:0}: Error finding container 9e264d2e6ceaf600924463735a37747f55ca4fc90846fa9affcc0b08b801d9ee: Status 404 returned error can't find the container with id 9e264d2e6ceaf600924463735a37747f55ca4fc90846fa9affcc0b08b801d9ee Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:57.999804 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-r4krs" Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:58.104470 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:58.135782 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mcnvt" event={"ID":"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7","Type":"ContainerStarted","Data":"371b71df1da15495e451410275c1ad353cd733f41d710a69f57959472d08da62"} Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:58.137656 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9","Type":"ContainerStarted","Data":"5c67e7cb7a27b7cbc0160192f7e7e35dd28ca26aea7bcb0c036ef6728b082ac7"} Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:58.138810 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sz7gc" 
event={"ID":"63bacc05-479b-49fb-bc82-7ca655524842","Type":"ContainerStarted","Data":"9e264d2e6ceaf600924463735a37747f55ca4fc90846fa9affcc0b08b801d9ee"} Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:58.142139 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" podUID="3c743092-fab2-48fa-8ff8-222d8443bd21" containerName="init" containerID="cri-o://8124adca69a0d4baaadb82a7fcd9347252996fa691e9a979ccf7a38c78b9feab" gracePeriod=10 Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:58.142452 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" event={"ID":"3c743092-fab2-48fa-8ff8-222d8443bd21","Type":"ContainerStarted","Data":"8124adca69a0d4baaadb82a7fcd9347252996fa691e9a979ccf7a38c78b9feab"} Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:58.900286 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="57cba631-503b-4795-8463-3d1e50957d58" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:58.989400 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:59.183899 4825 generic.go:334] "Generic (PLEG): container finished" podID="63bacc05-479b-49fb-bc82-7ca655524842" containerID="70ed6d4cfa15bfebff09f2f59348d6d8dd1b0f5db11d2ffedada79e265b7aa86" exitCode=0 Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:59.184024 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sz7gc" event={"ID":"63bacc05-479b-49fb-bc82-7ca655524842","Type":"ContainerDied","Data":"70ed6d4cfa15bfebff09f2f59348d6d8dd1b0f5db11d2ffedada79e265b7aa86"} Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:59.187745 4825 generic.go:334] "Generic (PLEG): container 
finished" podID="3c743092-fab2-48fa-8ff8-222d8443bd21" containerID="8124adca69a0d4baaadb82a7fcd9347252996fa691e9a979ccf7a38c78b9feab" exitCode=0 Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:59.187827 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" event={"ID":"3c743092-fab2-48fa-8ff8-222d8443bd21","Type":"ContainerDied","Data":"8124adca69a0d4baaadb82a7fcd9347252996fa691e9a979ccf7a38c78b9feab"} Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:59.190141 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mcnvt" event={"ID":"b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7","Type":"ContainerStarted","Data":"ec180b8c1a2af38185a46ac202bcdc68d80cfaa732f6d4d1f8e4a4015f8d6e45"} Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:59.231337 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-mcnvt" podStartSLOduration=4.231314887 podStartE2EDuration="4.231314887s" podCreationTimestamp="2026-01-22 15:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:42:59.222199316 +0000 UTC m=+1125.983726226" watchObservedRunningTime="2026-01-22 15:42:59.231314887 +0000 UTC m=+1125.992841797" Jan 22 15:42:59 crc kubenswrapper[4825]: I0122 15:42:59.237126 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 22 15:43:00 crc kubenswrapper[4825]: I0122 15:43:00.554839 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" Jan 22 15:43:00 crc kubenswrapper[4825]: I0122 15:43:00.641238 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c743092-fab2-48fa-8ff8-222d8443bd21-dns-svc\") pod \"3c743092-fab2-48fa-8ff8-222d8443bd21\" (UID: \"3c743092-fab2-48fa-8ff8-222d8443bd21\") " Jan 22 15:43:00 crc kubenswrapper[4825]: I0122 15:43:00.641583 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c743092-fab2-48fa-8ff8-222d8443bd21-config\") pod \"3c743092-fab2-48fa-8ff8-222d8443bd21\" (UID: \"3c743092-fab2-48fa-8ff8-222d8443bd21\") " Jan 22 15:43:00 crc kubenswrapper[4825]: I0122 15:43:00.641692 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tktr9\" (UniqueName: \"kubernetes.io/projected/3c743092-fab2-48fa-8ff8-222d8443bd21-kube-api-access-tktr9\") pod \"3c743092-fab2-48fa-8ff8-222d8443bd21\" (UID: \"3c743092-fab2-48fa-8ff8-222d8443bd21\") " Jan 22 15:43:00 crc kubenswrapper[4825]: I0122 15:43:00.642389 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c743092-fab2-48fa-8ff8-222d8443bd21-ovsdbserver-nb\") pod \"3c743092-fab2-48fa-8ff8-222d8443bd21\" (UID: \"3c743092-fab2-48fa-8ff8-222d8443bd21\") " Jan 22 15:43:00 crc kubenswrapper[4825]: I0122 15:43:00.648781 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c743092-fab2-48fa-8ff8-222d8443bd21-kube-api-access-tktr9" (OuterVolumeSpecName: "kube-api-access-tktr9") pod "3c743092-fab2-48fa-8ff8-222d8443bd21" (UID: "3c743092-fab2-48fa-8ff8-222d8443bd21"). InnerVolumeSpecName "kube-api-access-tktr9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:00 crc kubenswrapper[4825]: I0122 15:43:00.676587 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c743092-fab2-48fa-8ff8-222d8443bd21-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c743092-fab2-48fa-8ff8-222d8443bd21" (UID: "3c743092-fab2-48fa-8ff8-222d8443bd21"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:00 crc kubenswrapper[4825]: I0122 15:43:00.683609 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c743092-fab2-48fa-8ff8-222d8443bd21-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c743092-fab2-48fa-8ff8-222d8443bd21" (UID: "3c743092-fab2-48fa-8ff8-222d8443bd21"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:00 crc kubenswrapper[4825]: I0122 15:43:00.702281 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c743092-fab2-48fa-8ff8-222d8443bd21-config" (OuterVolumeSpecName: "config") pod "3c743092-fab2-48fa-8ff8-222d8443bd21" (UID: "3c743092-fab2-48fa-8ff8-222d8443bd21"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:00 crc kubenswrapper[4825]: I0122 15:43:00.785756 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c743092-fab2-48fa-8ff8-222d8443bd21-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:00 crc kubenswrapper[4825]: I0122 15:43:00.785782 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tktr9\" (UniqueName: \"kubernetes.io/projected/3c743092-fab2-48fa-8ff8-222d8443bd21-kube-api-access-tktr9\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:00 crc kubenswrapper[4825]: I0122 15:43:00.785849 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c743092-fab2-48fa-8ff8-222d8443bd21-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:00 crc kubenswrapper[4825]: I0122 15:43:00.785860 4825 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c743092-fab2-48fa-8ff8-222d8443bd21-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:01 crc kubenswrapper[4825]: I0122 15:43:01.225277 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sz7gc" event={"ID":"63bacc05-479b-49fb-bc82-7ca655524842","Type":"ContainerStarted","Data":"d6ec51704f098a43ef8bed3b68a5e1218851817643fb77be6fc903347ae82b9a"} Jan 22 15:43:01 crc kubenswrapper[4825]: I0122 15:43:01.225673 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:43:01 crc kubenswrapper[4825]: I0122 15:43:01.230717 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" Jan 22 15:43:01 crc kubenswrapper[4825]: I0122 15:43:01.231673 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-hs7xf" event={"ID":"3c743092-fab2-48fa-8ff8-222d8443bd21","Type":"ContainerDied","Data":"7ac3fa02ab93a50a30b58e869891277072c6666dd32744d4d78a5da6ab544a08"} Jan 22 15:43:01 crc kubenswrapper[4825]: I0122 15:43:01.231719 4825 scope.go:117] "RemoveContainer" containerID="8124adca69a0d4baaadb82a7fcd9347252996fa691e9a979ccf7a38c78b9feab" Jan 22 15:43:01 crc kubenswrapper[4825]: I0122 15:43:01.234765 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"1f7994d5-5cc8-4830-bcd1-9f63b9109a09","Type":"ContainerStarted","Data":"c5c2bfdc1fbcd2e0ab7822c662740609021ac93627c7dbde80e7542935136575"} Jan 22 15:43:01 crc kubenswrapper[4825]: I0122 15:43:01.252936 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-sz7gc" podStartSLOduration=5.252905543 podStartE2EDuration="5.252905543s" podCreationTimestamp="2026-01-22 15:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:43:01.24932583 +0000 UTC m=+1128.010852740" watchObservedRunningTime="2026-01-22 15:43:01.252905543 +0000 UTC m=+1128.014432453" Jan 22 15:43:01 crc kubenswrapper[4825]: I0122 15:43:01.296822 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-hs7xf"] Jan 22 15:43:01 crc kubenswrapper[4825]: I0122 15:43:01.308567 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-hs7xf"] Jan 22 15:43:01 crc kubenswrapper[4825]: I0122 15:43:01.358391 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 22 15:43:01 crc kubenswrapper[4825]: I0122 
15:43:01.358439 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 22 15:43:01 crc kubenswrapper[4825]: I0122 15:43:01.475569 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 22 15:43:01 crc kubenswrapper[4825]: I0122 15:43:01.530542 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c743092-fab2-48fa-8ff8-222d8443bd21" path="/var/lib/kubelet/pods/3c743092-fab2-48fa-8ff8-222d8443bd21/volumes" Jan 22 15:43:02 crc kubenswrapper[4825]: I0122 15:43:02.334759 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9","Type":"ContainerStarted","Data":"db61fb6fabb8bd2447d0138e4586f5c58a3d0d5c739ecdeb65a6bddc304d34c2"} Jan 22 15:43:02 crc kubenswrapper[4825]: I0122 15:43:02.634769 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 22 15:43:02 crc kubenswrapper[4825]: I0122 15:43:02.906818 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 22 15:43:02 crc kubenswrapper[4825]: I0122 15:43:02.907171 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.359728 4825 generic.go:334] "Generic (PLEG): container finished" podID="215992ea-1abc-44d0-925b-799eb87bcc09" containerID="184e43136592bf3469b06dc128b988a48972055ca89cc79136bb1b491d6c7e34" exitCode=0 Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.359809 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"215992ea-1abc-44d0-925b-799eb87bcc09","Type":"ContainerDied","Data":"184e43136592bf3469b06dc128b988a48972055ca89cc79136bb1b491d6c7e34"} Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.367093 4825 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9","Type":"ContainerStarted","Data":"2c665f5b1e3c4d30071004e773700d8606e962cff434cd19d45ae15c0c559297"} Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.368017 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.374082 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.517343 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-97nzk"] Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.518175 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.661162388 podStartE2EDuration="7.518164115s" podCreationTimestamp="2026-01-22 15:42:56 +0000 UTC" firstStartedPulling="2026-01-22 15:42:57.990311382 +0000 UTC m=+1124.751838292" lastFinishedPulling="2026-01-22 15:43:01.847313109 +0000 UTC m=+1128.608840019" observedRunningTime="2026-01-22 15:43:03.443233842 +0000 UTC m=+1130.204760752" watchObservedRunningTime="2026-01-22 15:43:03.518164115 +0000 UTC m=+1130.279691025" Jan 22 15:43:03 crc kubenswrapper[4825]: E0122 15:43:03.519314 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c743092-fab2-48fa-8ff8-222d8443bd21" containerName="init" Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.519355 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c743092-fab2-48fa-8ff8-222d8443bd21" containerName="init" Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.519655 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c743092-fab2-48fa-8ff8-222d8443bd21" containerName="init" Jan 22 15:43:03 crc kubenswrapper[4825]: E0122 15:43:03.520000 4825 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod215992ea_1abc_44d0_925b_799eb87bcc09.slice/crio-conmon-184e43136592bf3469b06dc128b988a48972055ca89cc79136bb1b491d6c7e34.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod215992ea_1abc_44d0_925b_799eb87bcc09.slice/crio-184e43136592bf3469b06dc128b988a48972055ca89cc79136bb1b491d6c7e34.scope\": RecentStats: unable to find data in memory cache]" Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.521199 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-97nzk" Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.701632 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5xl9\" (UniqueName: \"kubernetes.io/projected/3b017023-1c9c-4ff7-9f21-8370aa38cc26-kube-api-access-v5xl9\") pod \"placement-db-create-97nzk\" (UID: \"3b017023-1c9c-4ff7-9f21-8370aa38cc26\") " pod="openstack/placement-db-create-97nzk" Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.701703 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b017023-1c9c-4ff7-9f21-8370aa38cc26-operator-scripts\") pod \"placement-db-create-97nzk\" (UID: \"3b017023-1c9c-4ff7-9f21-8370aa38cc26\") " pod="openstack/placement-db-create-97nzk" Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.748968 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.749104 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-97nzk"] Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.764371 
4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8e3f-account-create-update-4b6j6"] Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.774164 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8e3f-account-create-update-4b6j6"] Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.774288 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8e3f-account-create-update-4b6j6" Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.776900 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.802815 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5xl9\" (UniqueName: \"kubernetes.io/projected/3b017023-1c9c-4ff7-9f21-8370aa38cc26-kube-api-access-v5xl9\") pod \"placement-db-create-97nzk\" (UID: \"3b017023-1c9c-4ff7-9f21-8370aa38cc26\") " pod="openstack/placement-db-create-97nzk" Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.803089 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b017023-1c9c-4ff7-9f21-8370aa38cc26-operator-scripts\") pod \"placement-db-create-97nzk\" (UID: \"3b017023-1c9c-4ff7-9f21-8370aa38cc26\") " pod="openstack/placement-db-create-97nzk" Jan 22 15:43:03 crc kubenswrapper[4825]: I0122 15:43:03.804938 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b017023-1c9c-4ff7-9f21-8370aa38cc26-operator-scripts\") pod \"placement-db-create-97nzk\" (UID: \"3b017023-1c9c-4ff7-9f21-8370aa38cc26\") " pod="openstack/placement-db-create-97nzk" Jan 22 15:43:04 crc kubenswrapper[4825]: I0122 15:43:04.021142 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5xl9\" (UniqueName: 
\"kubernetes.io/projected/3b017023-1c9c-4ff7-9f21-8370aa38cc26-kube-api-access-v5xl9\") pod \"placement-db-create-97nzk\" (UID: \"3b017023-1c9c-4ff7-9f21-8370aa38cc26\") " pod="openstack/placement-db-create-97nzk" Jan 22 15:43:04 crc kubenswrapper[4825]: I0122 15:43:04.027219 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wsd5\" (UniqueName: \"kubernetes.io/projected/37522712-d1ed-4a4d-ae99-c8fa95502dc1-kube-api-access-7wsd5\") pod \"placement-8e3f-account-create-update-4b6j6\" (UID: \"37522712-d1ed-4a4d-ae99-c8fa95502dc1\") " pod="openstack/placement-8e3f-account-create-update-4b6j6" Jan 22 15:43:04 crc kubenswrapper[4825]: I0122 15:43:04.027453 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37522712-d1ed-4a4d-ae99-c8fa95502dc1-operator-scripts\") pod \"placement-8e3f-account-create-update-4b6j6\" (UID: \"37522712-d1ed-4a4d-ae99-c8fa95502dc1\") " pod="openstack/placement-8e3f-account-create-update-4b6j6" Jan 22 15:43:04 crc kubenswrapper[4825]: I0122 15:43:04.074123 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-97nzk" Jan 22 15:43:04 crc kubenswrapper[4825]: I0122 15:43:04.128837 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wsd5\" (UniqueName: \"kubernetes.io/projected/37522712-d1ed-4a4d-ae99-c8fa95502dc1-kube-api-access-7wsd5\") pod \"placement-8e3f-account-create-update-4b6j6\" (UID: \"37522712-d1ed-4a4d-ae99-c8fa95502dc1\") " pod="openstack/placement-8e3f-account-create-update-4b6j6" Jan 22 15:43:04 crc kubenswrapper[4825]: I0122 15:43:04.128928 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37522712-d1ed-4a4d-ae99-c8fa95502dc1-operator-scripts\") pod \"placement-8e3f-account-create-update-4b6j6\" (UID: \"37522712-d1ed-4a4d-ae99-c8fa95502dc1\") " pod="openstack/placement-8e3f-account-create-update-4b6j6" Jan 22 15:43:04 crc kubenswrapper[4825]: I0122 15:43:04.130224 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37522712-d1ed-4a4d-ae99-c8fa95502dc1-operator-scripts\") pod \"placement-8e3f-account-create-update-4b6j6\" (UID: \"37522712-d1ed-4a4d-ae99-c8fa95502dc1\") " pod="openstack/placement-8e3f-account-create-update-4b6j6" Jan 22 15:43:04 crc kubenswrapper[4825]: I0122 15:43:04.147292 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wsd5\" (UniqueName: \"kubernetes.io/projected/37522712-d1ed-4a4d-ae99-c8fa95502dc1-kube-api-access-7wsd5\") pod \"placement-8e3f-account-create-update-4b6j6\" (UID: \"37522712-d1ed-4a4d-ae99-c8fa95502dc1\") " pod="openstack/placement-8e3f-account-create-update-4b6j6" Jan 22 15:43:04 crc kubenswrapper[4825]: I0122 15:43:04.383592 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"215992ea-1abc-44d0-925b-799eb87bcc09","Type":"ContainerStarted","Data":"fe630163da9699c6ae7767c15986a0a522ba8248bdbb3d16653256e23ac471e7"} Jan 22 15:43:04 crc kubenswrapper[4825]: I0122 15:43:04.384161 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 22 15:43:04 crc kubenswrapper[4825]: I0122 15:43:04.390010 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"1f7994d5-5cc8-4830-bcd1-9f63b9109a09","Type":"ContainerStarted","Data":"995da93f2341959537e2184cb0334b9948d3f772ce070b71a1f4034a3f980a9d"} Jan 22 15:43:04 crc kubenswrapper[4825]: I0122 15:43:04.390725 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Jan 22 15:43:04 crc kubenswrapper[4825]: I0122 15:43:04.401811 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8e3f-account-create-update-4b6j6" Jan 22 15:43:04 crc kubenswrapper[4825]: I0122 15:43:04.408158 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Jan 22 15:43:04 crc kubenswrapper[4825]: I0122 15:43:04.459474 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.12343387 podStartE2EDuration="1m16.459456291s" podCreationTimestamp="2026-01-22 15:41:48 +0000 UTC" firstStartedPulling="2026-01-22 15:41:51.015638069 +0000 UTC m=+1057.777164979" lastFinishedPulling="2026-01-22 15:42:29.35166049 +0000 UTC m=+1096.113187400" observedRunningTime="2026-01-22 15:43:04.427473576 +0000 UTC m=+1131.189000486" watchObservedRunningTime="2026-01-22 15:43:04.459456291 +0000 UTC m=+1131.220983201" Jan 22 15:43:04 crc kubenswrapper[4825]: I0122 15:43:04.477542 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=39.329416072 podStartE2EDuration="1m9.477524407s" podCreationTimestamp="2026-01-22 15:41:55 +0000 UTC" firstStartedPulling="2026-01-22 15:42:30.30960238 +0000 UTC m=+1097.071129290" lastFinishedPulling="2026-01-22 15:43:00.457710715 +0000 UTC m=+1127.219237625" observedRunningTime="2026-01-22 15:43:04.47378783 +0000 UTC m=+1131.235314750" watchObservedRunningTime="2026-01-22 15:43:04.477524407 +0000 UTC m=+1131.239051317" Jan 22 15:43:04 crc kubenswrapper[4825]: I0122 15:43:04.759878 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-97nzk"] Jan 22 15:43:05 crc kubenswrapper[4825]: I0122 15:43:05.436512 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-97nzk" event={"ID":"3b017023-1c9c-4ff7-9f21-8370aa38cc26","Type":"ContainerStarted","Data":"272c288ea5d43008249180e4bbe003aa0f55786481255c3b6e7cdcf335f90a07"} Jan 22 15:43:05 crc kubenswrapper[4825]: I0122 15:43:05.651352 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:43:05 crc kubenswrapper[4825]: I0122 15:43:05.651422 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:43:05 crc kubenswrapper[4825]: I0122 15:43:05.731273 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8e3f-account-create-update-4b6j6"] Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.029481 4825 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-8554648995-sz7gc"] Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.029750 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-sz7gc" podUID="63bacc05-479b-49fb-bc82-7ca655524842" containerName="dnsmasq-dns" containerID="cri-o://d6ec51704f098a43ef8bed3b68a5e1218851817643fb77be6fc903347ae82b9a" gracePeriod=10 Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.050881 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.242872 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bkbj9"] Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.245562 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.290347 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-bkbj9\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") " pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.290603 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-config\") pod \"dnsmasq-dns-b8fbc5445-bkbj9\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") " pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.290637 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-bkbj9\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") " pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.290678 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdf8j\" (UniqueName: \"kubernetes.io/projected/3d53d147-8362-48e9-b525-44249e49ae01-kube-api-access-bdf8j\") pod \"dnsmasq-dns-b8fbc5445-bkbj9\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") " pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.290740 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-bkbj9\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") " pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.356962 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bkbj9"] Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.555684 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-bkbj9\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") " pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.556113 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-config\") pod \"dnsmasq-dns-b8fbc5445-bkbj9\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") " pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 
15:43:06.556156 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-bkbj9\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") " pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.556202 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdf8j\" (UniqueName: \"kubernetes.io/projected/3d53d147-8362-48e9-b525-44249e49ae01-kube-api-access-bdf8j\") pod \"dnsmasq-dns-b8fbc5445-bkbj9\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") " pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.556254 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-bkbj9\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") " pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.557635 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-bkbj9\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") " pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.558190 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-bkbj9\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") " pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.558839 4825 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-config\") pod \"dnsmasq-dns-b8fbc5445-bkbj9\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") " pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.558881 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-bkbj9\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") " pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.580915 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-97nzk" event={"ID":"3b017023-1c9c-4ff7-9f21-8370aa38cc26","Type":"ContainerStarted","Data":"1934faf63fe73508d347b397d946ee591db6489ba2cf4030a4a659275bec7d37"} Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.594342 4825 generic.go:334] "Generic (PLEG): container finished" podID="63bacc05-479b-49fb-bc82-7ca655524842" containerID="d6ec51704f098a43ef8bed3b68a5e1218851817643fb77be6fc903347ae82b9a" exitCode=0 Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.594433 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sz7gc" event={"ID":"63bacc05-479b-49fb-bc82-7ca655524842","Type":"ContainerDied","Data":"d6ec51704f098a43ef8bed3b68a5e1218851817643fb77be6fc903347ae82b9a"} Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.601356 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8e3f-account-create-update-4b6j6" event={"ID":"37522712-d1ed-4a4d-ae99-c8fa95502dc1","Type":"ContainerStarted","Data":"ddb5bfd6e16eed5386825fba146882a12535518b5a8e1afade6c048203ae57b1"} Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.628176 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdf8j\" (UniqueName: 
\"kubernetes.io/projected/3d53d147-8362-48e9-b525-44249e49ae01-kube-api-access-bdf8j\") pod \"dnsmasq-dns-b8fbc5445-bkbj9\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") " pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.637582 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-97nzk" podStartSLOduration=3.6375575209999997 podStartE2EDuration="3.637557521s" podCreationTimestamp="2026-01-22 15:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:43:06.636354076 +0000 UTC m=+1133.397880986" watchObservedRunningTime="2026-01-22 15:43:06.637557521 +0000 UTC m=+1133.399084431" Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.710363 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-sz7gc" podUID="63bacc05-479b-49fb-bc82-7ca655524842" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: connect: connection refused" Jan 22 15:43:06 crc kubenswrapper[4825]: I0122 15:43:06.888930 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.590470 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.605742 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.611669 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.612191 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-9wm9f" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.612379 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.612709 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.623453 4825 generic.go:334] "Generic (PLEG): container finished" podID="3b017023-1c9c-4ff7-9f21-8370aa38cc26" containerID="1934faf63fe73508d347b397d946ee591db6489ba2cf4030a4a659275bec7d37" exitCode=0 Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.623532 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-97nzk" event={"ID":"3b017023-1c9c-4ff7-9f21-8370aa38cc26","Type":"ContainerDied","Data":"1934faf63fe73508d347b397d946ee591db6489ba2cf4030a4a659275bec7d37"} Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.626097 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sz7gc" event={"ID":"63bacc05-479b-49fb-bc82-7ca655524842","Type":"ContainerDied","Data":"9e264d2e6ceaf600924463735a37747f55ca4fc90846fa9affcc0b08b801d9ee"} Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.626137 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e264d2e6ceaf600924463735a37747f55ca4fc90846fa9affcc0b08b801d9ee" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.627604 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8e3f-account-create-update-4b6j6" 
event={"ID":"37522712-d1ed-4a4d-ae99-c8fa95502dc1","Type":"ContainerStarted","Data":"2444970279a7d3f2048b11a29bbf6a1a801c6bcf6a16405f6fac3b7d2e96ce41"} Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.630503 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.660498 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.693788 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8e3f-account-create-update-4b6j6" podStartSLOduration=4.693709431 podStartE2EDuration="4.693709431s" podCreationTimestamp="2026-01-22 15:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:43:07.674036458 +0000 UTC m=+1134.435563368" watchObservedRunningTime="2026-01-22 15:43:07.693709431 +0000 UTC m=+1134.455236341" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.790625 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-ovsdbserver-sb\") pod \"63bacc05-479b-49fb-bc82-7ca655524842\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.790735 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-dns-svc\") pod \"63bacc05-479b-49fb-bc82-7ca655524842\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.790766 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-ovsdbserver-nb\") pod \"63bacc05-479b-49fb-bc82-7ca655524842\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.790821 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls5cv\" (UniqueName: \"kubernetes.io/projected/63bacc05-479b-49fb-bc82-7ca655524842-kube-api-access-ls5cv\") pod \"63bacc05-479b-49fb-bc82-7ca655524842\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.791023 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-config\") pod \"63bacc05-479b-49fb-bc82-7ca655524842\" (UID: \"63bacc05-479b-49fb-bc82-7ca655524842\") " Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.791170 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bkbj9"] Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.792228 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/62f00afd-c39a-409f-ba5e-b5474959717b-lock\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.792603 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8864222f-1851-4967-a203-84babe8f5dc7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8864222f-1851-4967-a203-84babe8f5dc7\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.792800 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cache\" (UniqueName: \"kubernetes.io/empty-dir/62f00afd-c39a-409f-ba5e-b5474959717b-cache\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.792895 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq9m2\" (UniqueName: \"kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-kube-api-access-gq9m2\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.792952 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.808864 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63bacc05-479b-49fb-bc82-7ca655524842-kube-api-access-ls5cv" (OuterVolumeSpecName: "kube-api-access-ls5cv") pod "63bacc05-479b-49fb-bc82-7ca655524842" (UID: "63bacc05-479b-49fb-bc82-7ca655524842"). InnerVolumeSpecName "kube-api-access-ls5cv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.824585 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f00afd-c39a-409f-ba5e-b5474959717b-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.825374 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls5cv\" (UniqueName: \"kubernetes.io/projected/63bacc05-479b-49fb-bc82-7ca655524842-kube-api-access-ls5cv\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.863674 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "63bacc05-479b-49fb-bc82-7ca655524842" (UID: "63bacc05-479b-49fb-bc82-7ca655524842"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.866624 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-config" (OuterVolumeSpecName: "config") pod "63bacc05-479b-49fb-bc82-7ca655524842" (UID: "63bacc05-479b-49fb-bc82-7ca655524842"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.888901 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "63bacc05-479b-49fb-bc82-7ca655524842" (UID: "63bacc05-479b-49fb-bc82-7ca655524842"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.891264 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "63bacc05-479b-49fb-bc82-7ca655524842" (UID: "63bacc05-479b-49fb-bc82-7ca655524842"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.927112 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/62f00afd-c39a-409f-ba5e-b5474959717b-cache\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.927183 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq9m2\" (UniqueName: \"kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-kube-api-access-gq9m2\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.927217 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.927237 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f00afd-c39a-409f-ba5e-b5474959717b-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.927264 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/62f00afd-c39a-409f-ba5e-b5474959717b-lock\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.927333 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8864222f-1851-4967-a203-84babe8f5dc7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8864222f-1851-4967-a203-84babe8f5dc7\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.927426 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.927441 4825 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.927450 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.927462 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63bacc05-479b-49fb-bc82-7ca655524842-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.928412 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/62f00afd-c39a-409f-ba5e-b5474959717b-cache\") pod \"swift-storage-0\" (UID: 
\"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:07 crc kubenswrapper[4825]: E0122 15:43:07.928759 4825 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 15:43:07 crc kubenswrapper[4825]: E0122 15:43:07.928788 4825 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 15:43:07 crc kubenswrapper[4825]: E0122 15:43:07.928839 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift podName:62f00afd-c39a-409f-ba5e-b5474959717b nodeName:}" failed. No retries permitted until 2026-01-22 15:43:08.428819713 +0000 UTC m=+1135.190346623 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift") pod "swift-storage-0" (UID: "62f00afd-c39a-409f-ba5e-b5474959717b") : configmap "swift-ring-files" not found Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.929634 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/62f00afd-c39a-409f-ba5e-b5474959717b-lock\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.932611 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f00afd-c39a-409f-ba5e-b5474959717b-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.933758 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.933792 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8864222f-1851-4967-a203-84babe8f5dc7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8864222f-1851-4967-a203-84babe8f5dc7\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/96a8e69a62b64e6dc9408ab594e6b21cc8a63b609d19b1b5964f4d9673b45db3/globalmount\"" pod="openstack/swift-storage-0" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.958879 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq9m2\" (UniqueName: \"kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-kube-api-access-gq9m2\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:07 crc kubenswrapper[4825]: I0122 15:43:07.985847 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8864222f-1851-4967-a203-84babe8f5dc7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8864222f-1851-4967-a203-84babe8f5dc7\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.293348 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-zq89m"] Jan 22 15:43:08 crc kubenswrapper[4825]: E0122 15:43:08.294054 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63bacc05-479b-49fb-bc82-7ca655524842" containerName="dnsmasq-dns" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.294079 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="63bacc05-479b-49fb-bc82-7ca655524842" containerName="dnsmasq-dns" Jan 22 15:43:08 crc kubenswrapper[4825]: E0122 15:43:08.294114 4825 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="63bacc05-479b-49fb-bc82-7ca655524842" containerName="init" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.294120 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="63bacc05-479b-49fb-bc82-7ca655524842" containerName="init" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.294768 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="63bacc05-479b-49fb-bc82-7ca655524842" containerName="dnsmasq-dns" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.295492 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-zq89m" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.302375 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-zq89m"] Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.403561 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1cd2-account-create-update-8wkl6"] Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.404821 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1cd2-account-create-update-8wkl6" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.407198 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.419818 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1cd2-account-create-update-8wkl6"] Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.437298 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64xh6\" (UniqueName: \"kubernetes.io/projected/08d465c3-ddd8-4a39-8b52-6df888237aa0-kube-api-access-64xh6\") pod \"glance-db-create-zq89m\" (UID: \"08d465c3-ddd8-4a39-8b52-6df888237aa0\") " pod="openstack/glance-db-create-zq89m" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.437484 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.437611 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08d465c3-ddd8-4a39-8b52-6df888237aa0-operator-scripts\") pod \"glance-db-create-zq89m\" (UID: \"08d465c3-ddd8-4a39-8b52-6df888237aa0\") " pod="openstack/glance-db-create-zq89m" Jan 22 15:43:08 crc kubenswrapper[4825]: E0122 15:43:08.437825 4825 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 15:43:08 crc kubenswrapper[4825]: E0122 15:43:08.437861 4825 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 15:43:08 crc 
kubenswrapper[4825]: E0122 15:43:08.437921 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift podName:62f00afd-c39a-409f-ba5e-b5474959717b nodeName:}" failed. No retries permitted until 2026-01-22 15:43:09.43790271 +0000 UTC m=+1136.199429620 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift") pod "swift-storage-0" (UID: "62f00afd-c39a-409f-ba5e-b5474959717b") : configmap "swift-ring-files" not found Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.539509 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64xh6\" (UniqueName: \"kubernetes.io/projected/08d465c3-ddd8-4a39-8b52-6df888237aa0-kube-api-access-64xh6\") pod \"glance-db-create-zq89m\" (UID: \"08d465c3-ddd8-4a39-8b52-6df888237aa0\") " pod="openstack/glance-db-create-zq89m" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.539669 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08d465c3-ddd8-4a39-8b52-6df888237aa0-operator-scripts\") pod \"glance-db-create-zq89m\" (UID: \"08d465c3-ddd8-4a39-8b52-6df888237aa0\") " pod="openstack/glance-db-create-zq89m" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.539703 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34e423be-52ff-4474-af1b-472639d2b618-operator-scripts\") pod \"glance-1cd2-account-create-update-8wkl6\" (UID: \"34e423be-52ff-4474-af1b-472639d2b618\") " pod="openstack/glance-1cd2-account-create-update-8wkl6" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.539735 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj58h\" 
(UniqueName: \"kubernetes.io/projected/34e423be-52ff-4474-af1b-472639d2b618-kube-api-access-qj58h\") pod \"glance-1cd2-account-create-update-8wkl6\" (UID: \"34e423be-52ff-4474-af1b-472639d2b618\") " pod="openstack/glance-1cd2-account-create-update-8wkl6" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.540852 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08d465c3-ddd8-4a39-8b52-6df888237aa0-operator-scripts\") pod \"glance-db-create-zq89m\" (UID: \"08d465c3-ddd8-4a39-8b52-6df888237aa0\") " pod="openstack/glance-db-create-zq89m" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.574771 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64xh6\" (UniqueName: \"kubernetes.io/projected/08d465c3-ddd8-4a39-8b52-6df888237aa0-kube-api-access-64xh6\") pod \"glance-db-create-zq89m\" (UID: \"08d465c3-ddd8-4a39-8b52-6df888237aa0\") " pod="openstack/glance-db-create-zq89m" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.627688 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-zq89m" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.641737 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34e423be-52ff-4474-af1b-472639d2b618-operator-scripts\") pod \"glance-1cd2-account-create-update-8wkl6\" (UID: \"34e423be-52ff-4474-af1b-472639d2b618\") " pod="openstack/glance-1cd2-account-create-update-8wkl6" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.641800 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj58h\" (UniqueName: \"kubernetes.io/projected/34e423be-52ff-4474-af1b-472639d2b618-kube-api-access-qj58h\") pod \"glance-1cd2-account-create-update-8wkl6\" (UID: \"34e423be-52ff-4474-af1b-472639d2b618\") " pod="openstack/glance-1cd2-account-create-update-8wkl6" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.643773 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34e423be-52ff-4474-af1b-472639d2b618-operator-scripts\") pod \"glance-1cd2-account-create-update-8wkl6\" (UID: \"34e423be-52ff-4474-af1b-472639d2b618\") " pod="openstack/glance-1cd2-account-create-update-8wkl6" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.649430 4825 generic.go:334] "Generic (PLEG): container finished" podID="37522712-d1ed-4a4d-ae99-c8fa95502dc1" containerID="2444970279a7d3f2048b11a29bbf6a1a801c6bcf6a16405f6fac3b7d2e96ce41" exitCode=0 Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.649511 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8e3f-account-create-update-4b6j6" event={"ID":"37522712-d1ed-4a4d-ae99-c8fa95502dc1","Type":"ContainerDied","Data":"2444970279a7d3f2048b11a29bbf6a1a801c6bcf6a16405f6fac3b7d2e96ce41"} Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.655202 4825 generic.go:334] "Generic (PLEG): 
container finished" podID="3d53d147-8362-48e9-b525-44249e49ae01" containerID="d714f29a3362d4285848e566dd1864b577f3eb2114a4830e8e6532627c8c56ae" exitCode=0 Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.655316 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-sz7gc" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.657059 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" event={"ID":"3d53d147-8362-48e9-b525-44249e49ae01","Type":"ContainerDied","Data":"d714f29a3362d4285848e566dd1864b577f3eb2114a4830e8e6532627c8c56ae"} Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.657110 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" event={"ID":"3d53d147-8362-48e9-b525-44249e49ae01","Type":"ContainerStarted","Data":"ddcea4628b3d8306e6d50e5d512a15a6df54c690ca0ca23634307cb0c20f12f2"} Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.701620 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj58h\" (UniqueName: \"kubernetes.io/projected/34e423be-52ff-4474-af1b-472639d2b618-kube-api-access-qj58h\") pod \"glance-1cd2-account-create-update-8wkl6\" (UID: \"34e423be-52ff-4474-af1b-472639d2b618\") " pod="openstack/glance-1cd2-account-create-update-8wkl6" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.726249 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1cd2-account-create-update-8wkl6" Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.840509 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-sz7gc"] Jan 22 15:43:08 crc kubenswrapper[4825]: I0122 15:43:08.848885 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-sz7gc"] Jan 22 15:43:09 crc kubenswrapper[4825]: I0122 15:43:09.018008 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="57cba631-503b-4795-8463-3d1e50957d58" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 22 15:43:09 crc kubenswrapper[4825]: I0122 15:43:09.531218 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:09 crc kubenswrapper[4825]: E0122 15:43:09.531468 4825 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 15:43:09 crc kubenswrapper[4825]: E0122 15:43:09.531620 4825 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 15:43:09 crc kubenswrapper[4825]: E0122 15:43:09.531702 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift podName:62f00afd-c39a-409f-ba5e-b5474959717b nodeName:}" failed. No retries permitted until 2026-01-22 15:43:11.531679645 +0000 UTC m=+1138.293206555 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift") pod "swift-storage-0" (UID: "62f00afd-c39a-409f-ba5e-b5474959717b") : configmap "swift-ring-files" not found Jan 22 15:43:09 crc kubenswrapper[4825]: I0122 15:43:09.531793 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63bacc05-479b-49fb-bc82-7ca655524842" path="/var/lib/kubelet/pods/63bacc05-479b-49fb-bc82-7ca655524842/volumes" Jan 22 15:43:09 crc kubenswrapper[4825]: I0122 15:43:09.933454 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-sqf6r"] Jan 22 15:43:09 crc kubenswrapper[4825]: I0122 15:43:09.935894 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sqf6r" Jan 22 15:43:09 crc kubenswrapper[4825]: I0122 15:43:09.940869 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 22 15:43:09 crc kubenswrapper[4825]: I0122 15:43:09.944881 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sqf6r"] Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.119962 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f70dfc2d-40bb-4259-a731-08503cf4b183-operator-scripts\") pod \"root-account-create-update-sqf6r\" (UID: \"f70dfc2d-40bb-4259-a731-08503cf4b183\") " pod="openstack/root-account-create-update-sqf6r" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.120466 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv5hf\" (UniqueName: \"kubernetes.io/projected/f70dfc2d-40bb-4259-a731-08503cf4b183-kube-api-access-zv5hf\") pod \"root-account-create-update-sqf6r\" (UID: \"f70dfc2d-40bb-4259-a731-08503cf4b183\") " 
pod="openstack/root-account-create-update-sqf6r" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.222718 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f70dfc2d-40bb-4259-a731-08503cf4b183-operator-scripts\") pod \"root-account-create-update-sqf6r\" (UID: \"f70dfc2d-40bb-4259-a731-08503cf4b183\") " pod="openstack/root-account-create-update-sqf6r" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.223140 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv5hf\" (UniqueName: \"kubernetes.io/projected/f70dfc2d-40bb-4259-a731-08503cf4b183-kube-api-access-zv5hf\") pod \"root-account-create-update-sqf6r\" (UID: \"f70dfc2d-40bb-4259-a731-08503cf4b183\") " pod="openstack/root-account-create-update-sqf6r" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.223746 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f70dfc2d-40bb-4259-a731-08503cf4b183-operator-scripts\") pod \"root-account-create-update-sqf6r\" (UID: \"f70dfc2d-40bb-4259-a731-08503cf4b183\") " pod="openstack/root-account-create-update-sqf6r" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.252461 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv5hf\" (UniqueName: \"kubernetes.io/projected/f70dfc2d-40bb-4259-a731-08503cf4b183-kube-api-access-zv5hf\") pod \"root-account-create-update-sqf6r\" (UID: \"f70dfc2d-40bb-4259-a731-08503cf4b183\") " pod="openstack/root-account-create-update-sqf6r" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.271525 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sqf6r" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.574610 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-vffs5"] Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.576481 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.583461 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.583876 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.584347 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.606062 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-vffs5"] Jan 22 15:43:10 crc kubenswrapper[4825]: E0122 15:43:10.606891 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-66tnc ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-66tnc ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-vffs5" podUID="fe316398-2221-47e0-bed1-ce50e6ca13e5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.631127 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fe316398-2221-47e0-bed1-ce50e6ca13e5-dispersionconf\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" Jan 22 
15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.631185 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe316398-2221-47e0-bed1-ce50e6ca13e5-scripts\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.631218 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fe316398-2221-47e0-bed1-ce50e6ca13e5-swiftconf\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.631237 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66tnc\" (UniqueName: \"kubernetes.io/projected/fe316398-2221-47e0-bed1-ce50e6ca13e5-kube-api-access-66tnc\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.631299 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fe316398-2221-47e0-bed1-ce50e6ca13e5-ring-data-devices\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.631324 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe316398-2221-47e0-bed1-ce50e6ca13e5-combined-ca-bundle\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " 
pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.631368 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fe316398-2221-47e0-bed1-ce50e6ca13e5-etc-swift\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.635128 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-z6dh6"] Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.636573 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.648449 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-z6dh6"] Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.656854 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-vffs5"] Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.677902 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.689881 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.732781 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fe316398-2221-47e0-bed1-ce50e6ca13e5-etc-swift\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.732856 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-ring-data-devices\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.732882 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fe316398-2221-47e0-bed1-ce50e6ca13e5-dispersionconf\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.732900 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-etc-swift\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.732925 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe316398-2221-47e0-bed1-ce50e6ca13e5-scripts\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" 
Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.732940 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-swiftconf\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.732961 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-combined-ca-bundle\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.733003 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fe316398-2221-47e0-bed1-ce50e6ca13e5-swiftconf\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.733023 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66tnc\" (UniqueName: \"kubernetes.io/projected/fe316398-2221-47e0-bed1-ce50e6ca13e5-kube-api-access-66tnc\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.733066 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spv4q\" (UniqueName: \"kubernetes.io/projected/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-kube-api-access-spv4q\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 
15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.733098 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-scripts\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.733120 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fe316398-2221-47e0-bed1-ce50e6ca13e5-ring-data-devices\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.733139 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-dispersionconf\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.733160 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe316398-2221-47e0-bed1-ce50e6ca13e5-combined-ca-bundle\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.733270 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fe316398-2221-47e0-bed1-ce50e6ca13e5-etc-swift\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.734032 4825 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe316398-2221-47e0-bed1-ce50e6ca13e5-scripts\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.734193 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fe316398-2221-47e0-bed1-ce50e6ca13e5-ring-data-devices\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.740481 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fe316398-2221-47e0-bed1-ce50e6ca13e5-swiftconf\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.743380 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fe316398-2221-47e0-bed1-ce50e6ca13e5-dispersionconf\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.752142 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe316398-2221-47e0-bed1-ce50e6ca13e5-combined-ca-bundle\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.753588 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66tnc\" (UniqueName: 
\"kubernetes.io/projected/fe316398-2221-47e0-bed1-ce50e6ca13e5-kube-api-access-66tnc\") pod \"swift-ring-rebalance-vffs5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.836705 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spv4q\" (UniqueName: \"kubernetes.io/projected/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-kube-api-access-spv4q\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.837181 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-scripts\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.837246 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-dispersionconf\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.837452 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-ring-data-devices\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.837515 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-etc-swift\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.837590 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-swiftconf\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.837655 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-combined-ca-bundle\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:10 crc kubenswrapper[4825]: I0122 15:43:10.838921 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-scripts\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.011055 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fe316398-2221-47e0-bed1-ce50e6ca13e5-etc-swift\") pod \"fe316398-2221-47e0-bed1-ce50e6ca13e5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.011148 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fe316398-2221-47e0-bed1-ce50e6ca13e5-swiftconf\") pod \"fe316398-2221-47e0-bed1-ce50e6ca13e5\" (UID: 
\"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.011202 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fe316398-2221-47e0-bed1-ce50e6ca13e5-dispersionconf\") pod \"fe316398-2221-47e0-bed1-ce50e6ca13e5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.011254 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe316398-2221-47e0-bed1-ce50e6ca13e5-scripts\") pod \"fe316398-2221-47e0-bed1-ce50e6ca13e5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.011301 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66tnc\" (UniqueName: \"kubernetes.io/projected/fe316398-2221-47e0-bed1-ce50e6ca13e5-kube-api-access-66tnc\") pod \"fe316398-2221-47e0-bed1-ce50e6ca13e5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.011356 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe316398-2221-47e0-bed1-ce50e6ca13e5-combined-ca-bundle\") pod \"fe316398-2221-47e0-bed1-ce50e6ca13e5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.011384 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fe316398-2221-47e0-bed1-ce50e6ca13e5-ring-data-devices\") pod \"fe316398-2221-47e0-bed1-ce50e6ca13e5\" (UID: \"fe316398-2221-47e0-bed1-ce50e6ca13e5\") " Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.012181 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/fe316398-2221-47e0-bed1-ce50e6ca13e5-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "fe316398-2221-47e0-bed1-ce50e6ca13e5" (UID: "fe316398-2221-47e0-bed1-ce50e6ca13e5"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.013439 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe316398-2221-47e0-bed1-ce50e6ca13e5-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "fe316398-2221-47e0-bed1-ce50e6ca13e5" (UID: "fe316398-2221-47e0-bed1-ce50e6ca13e5"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.016490 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe316398-2221-47e0-bed1-ce50e6ca13e5-scripts" (OuterVolumeSpecName: "scripts") pod "fe316398-2221-47e0-bed1-ce50e6ca13e5" (UID: "fe316398-2221-47e0-bed1-ce50e6ca13e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.017110 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe316398-2221-47e0-bed1-ce50e6ca13e5-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "fe316398-2221-47e0-bed1-ce50e6ca13e5" (UID: "fe316398-2221-47e0-bed1-ce50e6ca13e5"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.026480 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe316398-2221-47e0-bed1-ce50e6ca13e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe316398-2221-47e0-bed1-ce50e6ca13e5" (UID: "fe316398-2221-47e0-bed1-ce50e6ca13e5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.026547 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe316398-2221-47e0-bed1-ce50e6ca13e5-kube-api-access-66tnc" (OuterVolumeSpecName: "kube-api-access-66tnc") pod "fe316398-2221-47e0-bed1-ce50e6ca13e5" (UID: "fe316398-2221-47e0-bed1-ce50e6ca13e5"). InnerVolumeSpecName "kube-api-access-66tnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.026567 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe316398-2221-47e0-bed1-ce50e6ca13e5-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "fe316398-2221-47e0-bed1-ce50e6ca13e5" (UID: "fe316398-2221-47e0-bed1-ce50e6ca13e5"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.027282 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-ring-data-devices\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.033125 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-combined-ca-bundle\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.033657 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-etc-swift\") pod \"swift-ring-rebalance-z6dh6\" (UID: 
\"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.033925 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-dispersionconf\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.038362 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-swiftconf\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.044634 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spv4q\" (UniqueName: \"kubernetes.io/projected/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-kube-api-access-spv4q\") pod \"swift-ring-rebalance-z6dh6\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.156735 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe316398-2221-47e0-bed1-ce50e6ca13e5-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.156789 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66tnc\" (UniqueName: \"kubernetes.io/projected/fe316398-2221-47e0-bed1-ce50e6ca13e5-kube-api-access-66tnc\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.156819 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe316398-2221-47e0-bed1-ce50e6ca13e5-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.156860 4825 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fe316398-2221-47e0-bed1-ce50e6ca13e5-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.156889 4825 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fe316398-2221-47e0-bed1-ce50e6ca13e5-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.156906 4825 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fe316398-2221-47e0-bed1-ce50e6ca13e5-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.156925 4825 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fe316398-2221-47e0-bed1-ce50e6ca13e5-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.255455 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.565895 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:11 crc kubenswrapper[4825]: E0122 15:43:11.566198 4825 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 15:43:11 crc kubenswrapper[4825]: E0122 15:43:11.566216 4825 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 15:43:11 crc kubenswrapper[4825]: E0122 15:43:11.566257 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift podName:62f00afd-c39a-409f-ba5e-b5474959717b nodeName:}" failed. No retries permitted until 2026-01-22 15:43:15.566243152 +0000 UTC m=+1142.327770062 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift") pod "swift-storage-0" (UID: "62f00afd-c39a-409f-ba5e-b5474959717b") : configmap "swift-ring-files" not found Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.696477 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-vffs5" Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.834243 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-vffs5"] Jan 22 15:43:11 crc kubenswrapper[4825]: I0122 15:43:11.843758 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-vffs5"] Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.176889 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.471223 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-cl5vd"] Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.472834 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cl5vd" Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.484412 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-cl5vd"] Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.662392 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr98t\" (UniqueName: \"kubernetes.io/projected/5ecaf74d-a8a9-4fd4-91dc-841debd0df4c-kube-api-access-gr98t\") pod \"keystone-db-create-cl5vd\" (UID: \"5ecaf74d-a8a9-4fd4-91dc-841debd0df4c\") " pod="openstack/keystone-db-create-cl5vd" Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.662876 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ecaf74d-a8a9-4fd4-91dc-841debd0df4c-operator-scripts\") pod \"keystone-db-create-cl5vd\" (UID: \"5ecaf74d-a8a9-4fd4-91dc-841debd0df4c\") " pod="openstack/keystone-db-create-cl5vd" Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.682612 4825 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-8142-account-create-update-znkfn"] Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.684938 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8142-account-create-update-znkfn" Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.690669 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.694490 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8142-account-create-update-znkfn"] Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.764324 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84dz6\" (UniqueName: \"kubernetes.io/projected/d2ceb787-de8b-4252-981e-818c4ca7c79c-kube-api-access-84dz6\") pod \"keystone-8142-account-create-update-znkfn\" (UID: \"d2ceb787-de8b-4252-981e-818c4ca7c79c\") " pod="openstack/keystone-8142-account-create-update-znkfn" Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.764416 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ecaf74d-a8a9-4fd4-91dc-841debd0df4c-operator-scripts\") pod \"keystone-db-create-cl5vd\" (UID: \"5ecaf74d-a8a9-4fd4-91dc-841debd0df4c\") " pod="openstack/keystone-db-create-cl5vd" Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.764483 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2ceb787-de8b-4252-981e-818c4ca7c79c-operator-scripts\") pod \"keystone-8142-account-create-update-znkfn\" (UID: \"d2ceb787-de8b-4252-981e-818c4ca7c79c\") " pod="openstack/keystone-8142-account-create-update-znkfn" Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.764527 4825 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-gr98t\" (UniqueName: \"kubernetes.io/projected/5ecaf74d-a8a9-4fd4-91dc-841debd0df4c-kube-api-access-gr98t\") pod \"keystone-db-create-cl5vd\" (UID: \"5ecaf74d-a8a9-4fd4-91dc-841debd0df4c\") " pod="openstack/keystone-db-create-cl5vd" Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.765260 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ecaf74d-a8a9-4fd4-91dc-841debd0df4c-operator-scripts\") pod \"keystone-db-create-cl5vd\" (UID: \"5ecaf74d-a8a9-4fd4-91dc-841debd0df4c\") " pod="openstack/keystone-db-create-cl5vd" Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.781327 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr98t\" (UniqueName: \"kubernetes.io/projected/5ecaf74d-a8a9-4fd4-91dc-841debd0df4c-kube-api-access-gr98t\") pod \"keystone-db-create-cl5vd\" (UID: \"5ecaf74d-a8a9-4fd4-91dc-841debd0df4c\") " pod="openstack/keystone-db-create-cl5vd" Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.866933 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84dz6\" (UniqueName: \"kubernetes.io/projected/d2ceb787-de8b-4252-981e-818c4ca7c79c-kube-api-access-84dz6\") pod \"keystone-8142-account-create-update-znkfn\" (UID: \"d2ceb787-de8b-4252-981e-818c4ca7c79c\") " pod="openstack/keystone-8142-account-create-update-znkfn" Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.867053 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2ceb787-de8b-4252-981e-818c4ca7c79c-operator-scripts\") pod \"keystone-8142-account-create-update-znkfn\" (UID: \"d2ceb787-de8b-4252-981e-818c4ca7c79c\") " pod="openstack/keystone-8142-account-create-update-znkfn" Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.867813 4825 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2ceb787-de8b-4252-981e-818c4ca7c79c-operator-scripts\") pod \"keystone-8142-account-create-update-znkfn\" (UID: \"d2ceb787-de8b-4252-981e-818c4ca7c79c\") " pod="openstack/keystone-8142-account-create-update-znkfn" Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.888620 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84dz6\" (UniqueName: \"kubernetes.io/projected/d2ceb787-de8b-4252-981e-818c4ca7c79c-kube-api-access-84dz6\") pod \"keystone-8142-account-create-update-znkfn\" (UID: \"d2ceb787-de8b-4252-981e-818c4ca7c79c\") " pod="openstack/keystone-8142-account-create-update-znkfn" Jan 22 15:43:12 crc kubenswrapper[4825]: I0122 15:43:12.899778 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cl5vd" Jan 22 15:43:13 crc kubenswrapper[4825]: I0122 15:43:13.001334 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8142-account-create-update-znkfn" Jan 22 15:43:13 crc kubenswrapper[4825]: I0122 15:43:13.532787 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe316398-2221-47e0-bed1-ce50e6ca13e5" path="/var/lib/kubelet/pods/fe316398-2221-47e0-bed1-ce50e6ca13e5/volumes" Jan 22 15:43:14 crc kubenswrapper[4825]: I0122 15:43:14.732139 4825 generic.go:334] "Generic (PLEG): container finished" podID="45e6f05d-8a80-49ca-add6-e8c41572b664" containerID="c47a51e689e8e6934dbe0f9c52428877a4be4d4087bcabd749f2d7315b443e0c" exitCode=0 Jan 22 15:43:14 crc kubenswrapper[4825]: I0122 15:43:14.733267 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"45e6f05d-8a80-49ca-add6-e8c41572b664","Type":"ContainerDied","Data":"c47a51e689e8e6934dbe0f9c52428877a4be4d4087bcabd749f2d7315b443e0c"} Jan 22 15:43:15 crc kubenswrapper[4825]: I0122 15:43:15.628514 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:15 crc kubenswrapper[4825]: E0122 15:43:15.629685 4825 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 15:43:15 crc kubenswrapper[4825]: E0122 15:43:15.629702 4825 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 15:43:15 crc kubenswrapper[4825]: E0122 15:43:15.629753 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift podName:62f00afd-c39a-409f-ba5e-b5474959717b nodeName:}" failed. 
No retries permitted until 2026-01-22 15:43:23.629727652 +0000 UTC m=+1150.391254572 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift") pod "swift-storage-0" (UID: "62f00afd-c39a-409f-ba5e-b5474959717b") : configmap "swift-ring-files" not found Jan 22 15:43:15 crc kubenswrapper[4825]: I0122 15:43:15.632243 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-snszk" podUID="306a03b3-2cdb-494a-ab5b-51d80fe3586c" containerName="ovn-controller" probeResult="failure" output=< Jan 22 15:43:15 crc kubenswrapper[4825]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 22 15:43:15 crc kubenswrapper[4825]: > Jan 22 15:43:15 crc kubenswrapper[4825]: I0122 15:43:15.646592 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bkgcs" Jan 22 15:43:15 crc kubenswrapper[4825]: I0122 15:43:15.682550 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bkgcs" Jan 22 15:43:15 crc kubenswrapper[4825]: I0122 15:43:15.768304 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-97nzk" event={"ID":"3b017023-1c9c-4ff7-9f21-8370aa38cc26","Type":"ContainerDied","Data":"272c288ea5d43008249180e4bbe003aa0f55786481255c3b6e7cdcf335f90a07"} Jan 22 15:43:15 crc kubenswrapper[4825]: I0122 15:43:15.768368 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="272c288ea5d43008249180e4bbe003aa0f55786481255c3b6e7cdcf335f90a07" Jan 22 15:43:15 crc kubenswrapper[4825]: I0122 15:43:15.770371 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8e3f-account-create-update-4b6j6" 
event={"ID":"37522712-d1ed-4a4d-ae99-c8fa95502dc1","Type":"ContainerDied","Data":"ddb5bfd6e16eed5386825fba146882a12535518b5a8e1afade6c048203ae57b1"} Jan 22 15:43:15 crc kubenswrapper[4825]: I0122 15:43:15.770635 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddb5bfd6e16eed5386825fba146882a12535518b5a8e1afade6c048203ae57b1" Jan 22 15:43:15 crc kubenswrapper[4825]: I0122 15:43:15.916571 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-snszk-config-m84jn"] Jan 22 15:43:15 crc kubenswrapper[4825]: I0122 15:43:15.917932 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:15 crc kubenswrapper[4825]: I0122 15:43:15.922545 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 22 15:43:15 crc kubenswrapper[4825]: I0122 15:43:15.927837 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-snszk-config-m84jn"] Jan 22 15:43:15 crc kubenswrapper[4825]: I0122 15:43:15.941601 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-97nzk" Jan 22 15:43:15 crc kubenswrapper[4825]: I0122 15:43:15.957146 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8e3f-account-create-update-4b6j6" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.034700 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37522712-d1ed-4a4d-ae99-c8fa95502dc1-operator-scripts\") pod \"37522712-d1ed-4a4d-ae99-c8fa95502dc1\" (UID: \"37522712-d1ed-4a4d-ae99-c8fa95502dc1\") " Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.034907 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wsd5\" (UniqueName: \"kubernetes.io/projected/37522712-d1ed-4a4d-ae99-c8fa95502dc1-kube-api-access-7wsd5\") pod \"37522712-d1ed-4a4d-ae99-c8fa95502dc1\" (UID: \"37522712-d1ed-4a4d-ae99-c8fa95502dc1\") " Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.034964 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b017023-1c9c-4ff7-9f21-8370aa38cc26-operator-scripts\") pod \"3b017023-1c9c-4ff7-9f21-8370aa38cc26\" (UID: \"3b017023-1c9c-4ff7-9f21-8370aa38cc26\") " Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.035165 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5xl9\" (UniqueName: \"kubernetes.io/projected/3b017023-1c9c-4ff7-9f21-8370aa38cc26-kube-api-access-v5xl9\") pod \"3b017023-1c9c-4ff7-9f21-8370aa38cc26\" (UID: \"3b017023-1c9c-4ff7-9f21-8370aa38cc26\") " Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.035580 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/df616b59-77cf-4694-aa6a-042b8eb13a39-var-log-ovn\") pod \"ovn-controller-snszk-config-m84jn\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 
15:43:16.035661 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df616b59-77cf-4694-aa6a-042b8eb13a39-scripts\") pod \"ovn-controller-snszk-config-m84jn\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.035708 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/df616b59-77cf-4694-aa6a-042b8eb13a39-var-run-ovn\") pod \"ovn-controller-snszk-config-m84jn\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.035749 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df616b59-77cf-4694-aa6a-042b8eb13a39-var-run\") pod \"ovn-controller-snszk-config-m84jn\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.035792 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj6gq\" (UniqueName: \"kubernetes.io/projected/df616b59-77cf-4694-aa6a-042b8eb13a39-kube-api-access-kj6gq\") pod \"ovn-controller-snszk-config-m84jn\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.035866 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/df616b59-77cf-4694-aa6a-042b8eb13a39-additional-scripts\") pod \"ovn-controller-snszk-config-m84jn\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " 
pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.040266 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37522712-d1ed-4a4d-ae99-c8fa95502dc1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37522712-d1ed-4a4d-ae99-c8fa95502dc1" (UID: "37522712-d1ed-4a4d-ae99-c8fa95502dc1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.045969 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b017023-1c9c-4ff7-9f21-8370aa38cc26-kube-api-access-v5xl9" (OuterVolumeSpecName: "kube-api-access-v5xl9") pod "3b017023-1c9c-4ff7-9f21-8370aa38cc26" (UID: "3b017023-1c9c-4ff7-9f21-8370aa38cc26"). InnerVolumeSpecName "kube-api-access-v5xl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.053441 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b017023-1c9c-4ff7-9f21-8370aa38cc26-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b017023-1c9c-4ff7-9f21-8370aa38cc26" (UID: "3b017023-1c9c-4ff7-9f21-8370aa38cc26"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.060510 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37522712-d1ed-4a4d-ae99-c8fa95502dc1-kube-api-access-7wsd5" (OuterVolumeSpecName: "kube-api-access-7wsd5") pod "37522712-d1ed-4a4d-ae99-c8fa95502dc1" (UID: "37522712-d1ed-4a4d-ae99-c8fa95502dc1"). InnerVolumeSpecName "kube-api-access-7wsd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.138266 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/df616b59-77cf-4694-aa6a-042b8eb13a39-var-log-ovn\") pod \"ovn-controller-snszk-config-m84jn\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.138331 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df616b59-77cf-4694-aa6a-042b8eb13a39-scripts\") pod \"ovn-controller-snszk-config-m84jn\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.138363 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/df616b59-77cf-4694-aa6a-042b8eb13a39-var-run-ovn\") pod \"ovn-controller-snszk-config-m84jn\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.138394 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df616b59-77cf-4694-aa6a-042b8eb13a39-var-run\") pod \"ovn-controller-snszk-config-m84jn\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.138417 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj6gq\" (UniqueName: \"kubernetes.io/projected/df616b59-77cf-4694-aa6a-042b8eb13a39-kube-api-access-kj6gq\") pod \"ovn-controller-snszk-config-m84jn\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " 
pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.138458 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/df616b59-77cf-4694-aa6a-042b8eb13a39-additional-scripts\") pod \"ovn-controller-snszk-config-m84jn\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.138511 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5xl9\" (UniqueName: \"kubernetes.io/projected/3b017023-1c9c-4ff7-9f21-8370aa38cc26-kube-api-access-v5xl9\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.138522 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37522712-d1ed-4a4d-ae99-c8fa95502dc1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.138531 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wsd5\" (UniqueName: \"kubernetes.io/projected/37522712-d1ed-4a4d-ae99-c8fa95502dc1-kube-api-access-7wsd5\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.138539 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b017023-1c9c-4ff7-9f21-8370aa38cc26-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.139112 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/df616b59-77cf-4694-aa6a-042b8eb13a39-var-run-ovn\") pod \"ovn-controller-snszk-config-m84jn\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 
15:43:16.139196 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/df616b59-77cf-4694-aa6a-042b8eb13a39-var-log-ovn\") pod \"ovn-controller-snszk-config-m84jn\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.139235 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df616b59-77cf-4694-aa6a-042b8eb13a39-var-run\") pod \"ovn-controller-snszk-config-m84jn\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.139601 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/df616b59-77cf-4694-aa6a-042b8eb13a39-additional-scripts\") pod \"ovn-controller-snszk-config-m84jn\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.140953 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df616b59-77cf-4694-aa6a-042b8eb13a39-scripts\") pod \"ovn-controller-snszk-config-m84jn\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.171463 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj6gq\" (UniqueName: \"kubernetes.io/projected/df616b59-77cf-4694-aa6a-042b8eb13a39-kube-api-access-kj6gq\") pod \"ovn-controller-snszk-config-m84jn\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.275809 4825 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.522549 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-zq89m"] Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.552870 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1cd2-account-create-update-8wkl6"] Jan 22 15:43:16 crc kubenswrapper[4825]: W0122 15:43:16.562243 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08d465c3_ddd8_4a39_8b52_6df888237aa0.slice/crio-b649bed9cd4a6077ec067c5a2a1a00184e8c5e9f8ff8a6dfe2edb21ae7a7739c WatchSource:0}: Error finding container b649bed9cd4a6077ec067c5a2a1a00184e8c5e9f8ff8a6dfe2edb21ae7a7739c: Status 404 returned error can't find the container with id b649bed9cd4a6077ec067c5a2a1a00184e8c5e9f8ff8a6dfe2edb21ae7a7739c Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.579723 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.791030 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-z6dh6"] Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.821293 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-zq89m" event={"ID":"08d465c3-ddd8-4a39-8b52-6df888237aa0","Type":"ContainerStarted","Data":"b649bed9cd4a6077ec067c5a2a1a00184e8c5e9f8ff8a6dfe2edb21ae7a7739c"} Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.837774 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8142-account-create-update-znkfn"] Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.873498 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"45e6f05d-8a80-49ca-add6-e8c41572b664","Type":"ContainerStarted","Data":"020f01fa01c7531efac312a1a4ee10db30b605df6436c83ee37b61635da0a2e3"} Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.874288 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.881362 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.881775 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-cl5vd"] Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.888345 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" event={"ID":"3d53d147-8362-48e9-b525-44249e49ae01","Type":"ContainerStarted","Data":"dc4e83a0b3ba33d512d64c0bb483c49691ed9e4523e60de0ca68207752301676"} Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.889591 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.943932 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96e4c8b8-1127-4f5a-9fe7-51f8b2478388","Type":"ContainerStarted","Data":"5dc8bcc6d2882d133f792e5ea4a3405374deb065b251034dd6dfdf8a7a565469"} Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.959280 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8e3f-account-create-update-4b6j6" Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.964056 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1cd2-account-create-update-8wkl6" event={"ID":"34e423be-52ff-4474-af1b-472639d2b618","Type":"ContainerStarted","Data":"2273fa53b3419ae25e7b64c4ce02478cffd41eea525578e6b6f3e1e1b4e7ec6d"} Jan 22 15:43:16 crc kubenswrapper[4825]: I0122 15:43:16.964227 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-97nzk" Jan 22 15:43:17 crc kubenswrapper[4825]: I0122 15:43:17.011305 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371947.8435 podStartE2EDuration="1m29.011276305s" podCreationTimestamp="2026-01-22 15:41:48 +0000 UTC" firstStartedPulling="2026-01-22 15:41:50.661939825 +0000 UTC m=+1057.423466735" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:43:16.957616111 +0000 UTC m=+1143.719143021" watchObservedRunningTime="2026-01-22 15:43:17.011276305 +0000 UTC m=+1143.772803215" Jan 22 15:43:17 crc kubenswrapper[4825]: I0122 15:43:17.036554 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 22 15:43:17 crc kubenswrapper[4825]: I0122 15:43:17.080426 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sqf6r"] Jan 22 15:43:17 crc kubenswrapper[4825]: I0122 15:43:17.141530 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" podStartSLOduration=11.141510179 podStartE2EDuration="11.141510179s" podCreationTimestamp="2026-01-22 15:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:43:17.036788035 +0000 
UTC m=+1143.798314945" watchObservedRunningTime="2026-01-22 15:43:17.141510179 +0000 UTC m=+1143.903037089" Jan 22 15:43:17 crc kubenswrapper[4825]: I0122 15:43:17.242842 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-snszk-config-m84jn"] Jan 22 15:43:17 crc kubenswrapper[4825]: I0122 15:43:17.970192 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8142-account-create-update-znkfn" event={"ID":"d2ceb787-de8b-4252-981e-818c4ca7c79c","Type":"ContainerStarted","Data":"190abddd8a55a3fe35d6e0419f6953c9e82ba75e24580202d90e0a38e48e7c21"} Jan 22 15:43:17 crc kubenswrapper[4825]: I0122 15:43:17.970530 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8142-account-create-update-znkfn" event={"ID":"d2ceb787-de8b-4252-981e-818c4ca7c79c","Type":"ContainerStarted","Data":"3a27df17915a3a78c0fa8959d720a0508a916c03f16bf801de385b629f5f7912"} Jan 22 15:43:17 crc kubenswrapper[4825]: I0122 15:43:17.972106 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-z6dh6" event={"ID":"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2","Type":"ContainerStarted","Data":"3dbef76531505da97891058ecad8fc461cc3722b251b94792a9665d5a77b0245"} Jan 22 15:43:17 crc kubenswrapper[4825]: I0122 15:43:17.979633 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cl5vd" event={"ID":"5ecaf74d-a8a9-4fd4-91dc-841debd0df4c","Type":"ContainerStarted","Data":"9c942a2d0f8f8c7ebdeac5233b4f19f183a6ffd6d67d42b9672aa9b0b7f9341f"} Jan 22 15:43:17 crc kubenswrapper[4825]: I0122 15:43:17.979679 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cl5vd" event={"ID":"5ecaf74d-a8a9-4fd4-91dc-841debd0df4c","Type":"ContainerStarted","Data":"4495adf890a17e66f1afcc39011aca78753ea9dbabf5cc46f350d671b3c86ab8"} Jan 22 15:43:17 crc kubenswrapper[4825]: I0122 15:43:17.983105 4825 generic.go:334] "Generic (PLEG): container finished" 
podID="08d465c3-ddd8-4a39-8b52-6df888237aa0" containerID="3e2a7a9b753421aa14d7c0941c26bc0857666c5b29001d65ec2b8f8f6b4b7356" exitCode=0 Jan 22 15:43:17 crc kubenswrapper[4825]: I0122 15:43:17.983154 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-zq89m" event={"ID":"08d465c3-ddd8-4a39-8b52-6df888237aa0","Type":"ContainerDied","Data":"3e2a7a9b753421aa14d7c0941c26bc0857666c5b29001d65ec2b8f8f6b4b7356"} Jan 22 15:43:17 crc kubenswrapper[4825]: I0122 15:43:17.985848 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sqf6r" event={"ID":"f70dfc2d-40bb-4259-a731-08503cf4b183","Type":"ContainerStarted","Data":"a673f8887881f54506ef484bd7fdd9ac65362dad0df6b28c96294289ab4eac9d"} Jan 22 15:43:17 crc kubenswrapper[4825]: I0122 15:43:17.985888 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sqf6r" event={"ID":"f70dfc2d-40bb-4259-a731-08503cf4b183","Type":"ContainerStarted","Data":"a408b56650d3731e9bb37a05f626b61d24a6a4182946130f2b7ef75a312ff6ec"} Jan 22 15:43:18 crc kubenswrapper[4825]: I0122 15:43:18.002669 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1cd2-account-create-update-8wkl6" event={"ID":"34e423be-52ff-4474-af1b-472639d2b618","Type":"ContainerStarted","Data":"4f52553f8b9f2a304e23d98027383e178b8a7739cd54e0b35aa80646129ac535"} Jan 22 15:43:18 crc kubenswrapper[4825]: I0122 15:43:18.004937 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-8142-account-create-update-znkfn" podStartSLOduration=6.004915448 podStartE2EDuration="6.004915448s" podCreationTimestamp="2026-01-22 15:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:43:17.99659498 +0000 UTC m=+1144.758121890" watchObservedRunningTime="2026-01-22 15:43:18.004915448 +0000 UTC m=+1144.766442358" Jan 22 
15:43:18 crc kubenswrapper[4825]: I0122 15:43:18.010332 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-snszk-config-m84jn" event={"ID":"df616b59-77cf-4694-aa6a-042b8eb13a39","Type":"ContainerStarted","Data":"ad3f02ef8e31a96f53c232cd54cbc40bb4131c68f786f32bfdc467a20dc5f556"} Jan 22 15:43:18 crc kubenswrapper[4825]: I0122 15:43:18.010370 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-snszk-config-m84jn" event={"ID":"df616b59-77cf-4694-aa6a-042b8eb13a39","Type":"ContainerStarted","Data":"b7c4638b54ba31e8698d521ea15a28a8e27b4860fbbcab9052f20a3ab4f1f341"} Jan 22 15:43:18 crc kubenswrapper[4825]: I0122 15:43:18.031086 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-sqf6r" podStartSLOduration=9.031066866 podStartE2EDuration="9.031066866s" podCreationTimestamp="2026-01-22 15:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:43:18.021632916 +0000 UTC m=+1144.783159826" watchObservedRunningTime="2026-01-22 15:43:18.031066866 +0000 UTC m=+1144.792593776" Jan 22 15:43:18 crc kubenswrapper[4825]: I0122 15:43:18.204088 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-1cd2-account-create-update-8wkl6" podStartSLOduration=10.204064442 podStartE2EDuration="10.204064442s" podCreationTimestamp="2026-01-22 15:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:43:18.178232284 +0000 UTC m=+1144.939759194" watchObservedRunningTime="2026-01-22 15:43:18.204064442 +0000 UTC m=+1144.965591352" Jan 22 15:43:18 crc kubenswrapper[4825]: I0122 15:43:18.219700 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-snszk-config-m84jn" 
podStartSLOduration=3.219677629 podStartE2EDuration="3.219677629s" podCreationTimestamp="2026-01-22 15:43:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:43:18.2053834 +0000 UTC m=+1144.966910310" watchObservedRunningTime="2026-01-22 15:43:18.219677629 +0000 UTC m=+1144.981204529" Jan 22 15:43:18 crc kubenswrapper[4825]: I0122 15:43:18.904940 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="57cba631-503b-4795-8463-3d1e50957d58" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 22 15:43:19 crc kubenswrapper[4825]: I0122 15:43:19.022057 4825 generic.go:334] "Generic (PLEG): container finished" podID="5ecaf74d-a8a9-4fd4-91dc-841debd0df4c" containerID="9c942a2d0f8f8c7ebdeac5233b4f19f183a6ffd6d67d42b9672aa9b0b7f9341f" exitCode=0 Jan 22 15:43:19 crc kubenswrapper[4825]: I0122 15:43:19.022148 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cl5vd" event={"ID":"5ecaf74d-a8a9-4fd4-91dc-841debd0df4c","Type":"ContainerDied","Data":"9c942a2d0f8f8c7ebdeac5233b4f19f183a6ffd6d67d42b9672aa9b0b7f9341f"} Jan 22 15:43:19 crc kubenswrapper[4825]: I0122 15:43:19.028419 4825 generic.go:334] "Generic (PLEG): container finished" podID="f70dfc2d-40bb-4259-a731-08503cf4b183" containerID="a673f8887881f54506ef484bd7fdd9ac65362dad0df6b28c96294289ab4eac9d" exitCode=0 Jan 22 15:43:19 crc kubenswrapper[4825]: I0122 15:43:19.028528 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sqf6r" event={"ID":"f70dfc2d-40bb-4259-a731-08503cf4b183","Type":"ContainerDied","Data":"a673f8887881f54506ef484bd7fdd9ac65362dad0df6b28c96294289ab4eac9d"} Jan 22 15:43:19 crc kubenswrapper[4825]: I0122 15:43:19.032094 4825 generic.go:334] "Generic (PLEG): container finished" 
podID="34e423be-52ff-4474-af1b-472639d2b618" containerID="4f52553f8b9f2a304e23d98027383e178b8a7739cd54e0b35aa80646129ac535" exitCode=0 Jan 22 15:43:19 crc kubenswrapper[4825]: I0122 15:43:19.032207 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1cd2-account-create-update-8wkl6" event={"ID":"34e423be-52ff-4474-af1b-472639d2b618","Type":"ContainerDied","Data":"4f52553f8b9f2a304e23d98027383e178b8a7739cd54e0b35aa80646129ac535"} Jan 22 15:43:19 crc kubenswrapper[4825]: I0122 15:43:19.058529 4825 generic.go:334] "Generic (PLEG): container finished" podID="df616b59-77cf-4694-aa6a-042b8eb13a39" containerID="ad3f02ef8e31a96f53c232cd54cbc40bb4131c68f786f32bfdc467a20dc5f556" exitCode=0 Jan 22 15:43:19 crc kubenswrapper[4825]: I0122 15:43:19.059038 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-snszk-config-m84jn" event={"ID":"df616b59-77cf-4694-aa6a-042b8eb13a39","Type":"ContainerDied","Data":"ad3f02ef8e31a96f53c232cd54cbc40bb4131c68f786f32bfdc467a20dc5f556"} Jan 22 15:43:19 crc kubenswrapper[4825]: I0122 15:43:19.074234 4825 generic.go:334] "Generic (PLEG): container finished" podID="d2ceb787-de8b-4252-981e-818c4ca7c79c" containerID="190abddd8a55a3fe35d6e0419f6953c9e82ba75e24580202d90e0a38e48e7c21" exitCode=0 Jan 22 15:43:19 crc kubenswrapper[4825]: I0122 15:43:19.075952 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8142-account-create-update-znkfn" event={"ID":"d2ceb787-de8b-4252-981e-818c4ca7c79c","Type":"ContainerDied","Data":"190abddd8a55a3fe35d6e0419f6953c9e82ba75e24580202d90e0a38e48e7c21"} Jan 22 15:43:20 crc kubenswrapper[4825]: I0122 15:43:20.422479 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 22 15:43:20 crc kubenswrapper[4825]: I0122 15:43:20.650323 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-snszk" Jan 22 15:43:21 crc 
kubenswrapper[4825]: I0122 15:43:21.952362 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" Jan 22 15:43:22 crc kubenswrapper[4825]: I0122 15:43:22.030013 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dfcq6"] Jan 22 15:43:22 crc kubenswrapper[4825]: I0122 15:43:22.030308 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" podUID="8ed16250-e013-4590-a99d-55576235c7d9" containerName="dnsmasq-dns" containerID="cri-o://bbacf0dcb7628cfd229bb4a5d96334ecd2b31115d165a1936ee4d7f8d6aba649" gracePeriod=10 Jan 22 15:43:22 crc kubenswrapper[4825]: I0122 15:43:22.832163 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-zq89m" Jan 22 15:43:22 crc kubenswrapper[4825]: I0122 15:43:22.976051 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64xh6\" (UniqueName: \"kubernetes.io/projected/08d465c3-ddd8-4a39-8b52-6df888237aa0-kube-api-access-64xh6\") pod \"08d465c3-ddd8-4a39-8b52-6df888237aa0\" (UID: \"08d465c3-ddd8-4a39-8b52-6df888237aa0\") " Jan 22 15:43:22 crc kubenswrapper[4825]: I0122 15:43:22.976209 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08d465c3-ddd8-4a39-8b52-6df888237aa0-operator-scripts\") pod \"08d465c3-ddd8-4a39-8b52-6df888237aa0\" (UID: \"08d465c3-ddd8-4a39-8b52-6df888237aa0\") " Jan 22 15:43:22 crc kubenswrapper[4825]: I0122 15:43:22.977452 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08d465c3-ddd8-4a39-8b52-6df888237aa0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08d465c3-ddd8-4a39-8b52-6df888237aa0" (UID: "08d465c3-ddd8-4a39-8b52-6df888237aa0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:22 crc kubenswrapper[4825]: I0122 15:43:22.985549 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08d465c3-ddd8-4a39-8b52-6df888237aa0-kube-api-access-64xh6" (OuterVolumeSpecName: "kube-api-access-64xh6") pod "08d465c3-ddd8-4a39-8b52-6df888237aa0" (UID: "08d465c3-ddd8-4a39-8b52-6df888237aa0"). InnerVolumeSpecName "kube-api-access-64xh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:22 crc kubenswrapper[4825]: I0122 15:43:22.987597 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cl5vd" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.064398 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1cd2-account-create-update-8wkl6" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.091988 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64xh6\" (UniqueName: \"kubernetes.io/projected/08d465c3-ddd8-4a39-8b52-6df888237aa0-kube-api-access-64xh6\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.092024 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08d465c3-ddd8-4a39-8b52-6df888237aa0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.093877 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sqf6r" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.097418 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.125007 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8142-account-create-update-znkfn" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.247417 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df616b59-77cf-4694-aa6a-042b8eb13a39-scripts\") pod \"df616b59-77cf-4694-aa6a-042b8eb13a39\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.247461 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv5hf\" (UniqueName: \"kubernetes.io/projected/f70dfc2d-40bb-4259-a731-08503cf4b183-kube-api-access-zv5hf\") pod \"f70dfc2d-40bb-4259-a731-08503cf4b183\" (UID: \"f70dfc2d-40bb-4259-a731-08503cf4b183\") " Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.247493 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f70dfc2d-40bb-4259-a731-08503cf4b183-operator-scripts\") pod \"f70dfc2d-40bb-4259-a731-08503cf4b183\" (UID: \"f70dfc2d-40bb-4259-a731-08503cf4b183\") " Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.247552 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj6gq\" (UniqueName: \"kubernetes.io/projected/df616b59-77cf-4694-aa6a-042b8eb13a39-kube-api-access-kj6gq\") pod \"df616b59-77cf-4694-aa6a-042b8eb13a39\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.247608 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/df616b59-77cf-4694-aa6a-042b8eb13a39-additional-scripts\") pod \"df616b59-77cf-4694-aa6a-042b8eb13a39\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.247634 4825 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/df616b59-77cf-4694-aa6a-042b8eb13a39-var-log-ovn\") pod \"df616b59-77cf-4694-aa6a-042b8eb13a39\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.247651 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34e423be-52ff-4474-af1b-472639d2b618-operator-scripts\") pod \"34e423be-52ff-4474-af1b-472639d2b618\" (UID: \"34e423be-52ff-4474-af1b-472639d2b618\") " Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.247712 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df616b59-77cf-4694-aa6a-042b8eb13a39-var-run\") pod \"df616b59-77cf-4694-aa6a-042b8eb13a39\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.247740 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84dz6\" (UniqueName: \"kubernetes.io/projected/d2ceb787-de8b-4252-981e-818c4ca7c79c-kube-api-access-84dz6\") pod \"d2ceb787-de8b-4252-981e-818c4ca7c79c\" (UID: \"d2ceb787-de8b-4252-981e-818c4ca7c79c\") " Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.247766 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/df616b59-77cf-4694-aa6a-042b8eb13a39-var-run-ovn\") pod \"df616b59-77cf-4694-aa6a-042b8eb13a39\" (UID: \"df616b59-77cf-4694-aa6a-042b8eb13a39\") " Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.247786 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2ceb787-de8b-4252-981e-818c4ca7c79c-operator-scripts\") pod \"d2ceb787-de8b-4252-981e-818c4ca7c79c\" (UID: 
\"d2ceb787-de8b-4252-981e-818c4ca7c79c\") " Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.247812 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj58h\" (UniqueName: \"kubernetes.io/projected/34e423be-52ff-4474-af1b-472639d2b618-kube-api-access-qj58h\") pod \"34e423be-52ff-4474-af1b-472639d2b618\" (UID: \"34e423be-52ff-4474-af1b-472639d2b618\") " Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.247862 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr98t\" (UniqueName: \"kubernetes.io/projected/5ecaf74d-a8a9-4fd4-91dc-841debd0df4c-kube-api-access-gr98t\") pod \"5ecaf74d-a8a9-4fd4-91dc-841debd0df4c\" (UID: \"5ecaf74d-a8a9-4fd4-91dc-841debd0df4c\") " Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.247894 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ecaf74d-a8a9-4fd4-91dc-841debd0df4c-operator-scripts\") pod \"5ecaf74d-a8a9-4fd4-91dc-841debd0df4c\" (UID: \"5ecaf74d-a8a9-4fd4-91dc-841debd0df4c\") " Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.250480 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e423be-52ff-4474-af1b-472639d2b618-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34e423be-52ff-4474-af1b-472639d2b618" (UID: "34e423be-52ff-4474-af1b-472639d2b618"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.253190 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df616b59-77cf-4694-aa6a-042b8eb13a39-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "df616b59-77cf-4694-aa6a-042b8eb13a39" (UID: "df616b59-77cf-4694-aa6a-042b8eb13a39"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.253258 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df616b59-77cf-4694-aa6a-042b8eb13a39-var-run" (OuterVolumeSpecName: "var-run") pod "df616b59-77cf-4694-aa6a-042b8eb13a39" (UID: "df616b59-77cf-4694-aa6a-042b8eb13a39"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.253345 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df616b59-77cf-4694-aa6a-042b8eb13a39-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "df616b59-77cf-4694-aa6a-042b8eb13a39" (UID: "df616b59-77cf-4694-aa6a-042b8eb13a39"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.253575 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f70dfc2d-40bb-4259-a731-08503cf4b183-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f70dfc2d-40bb-4259-a731-08503cf4b183" (UID: "f70dfc2d-40bb-4259-a731-08503cf4b183"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.253886 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df616b59-77cf-4694-aa6a-042b8eb13a39-scripts" (OuterVolumeSpecName: "scripts") pod "df616b59-77cf-4694-aa6a-042b8eb13a39" (UID: "df616b59-77cf-4694-aa6a-042b8eb13a39"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.254011 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df616b59-77cf-4694-aa6a-042b8eb13a39-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "df616b59-77cf-4694-aa6a-042b8eb13a39" (UID: "df616b59-77cf-4694-aa6a-042b8eb13a39"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.257042 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2ceb787-de8b-4252-981e-818c4ca7c79c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d2ceb787-de8b-4252-981e-818c4ca7c79c" (UID: "d2ceb787-de8b-4252-981e-818c4ca7c79c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.257968 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e423be-52ff-4474-af1b-472639d2b618-kube-api-access-qj58h" (OuterVolumeSpecName: "kube-api-access-qj58h") pod "34e423be-52ff-4474-af1b-472639d2b618" (UID: "34e423be-52ff-4474-af1b-472639d2b618"). InnerVolumeSpecName "kube-api-access-qj58h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.260570 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df616b59-77cf-4694-aa6a-042b8eb13a39-kube-api-access-kj6gq" (OuterVolumeSpecName: "kube-api-access-kj6gq") pod "df616b59-77cf-4694-aa6a-042b8eb13a39" (UID: "df616b59-77cf-4694-aa6a-042b8eb13a39"). InnerVolumeSpecName "kube-api-access-kj6gq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.261506 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ecaf74d-a8a9-4fd4-91dc-841debd0df4c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ecaf74d-a8a9-4fd4-91dc-841debd0df4c" (UID: "5ecaf74d-a8a9-4fd4-91dc-841debd0df4c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.264857 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ecaf74d-a8a9-4fd4-91dc-841debd0df4c-kube-api-access-gr98t" (OuterVolumeSpecName: "kube-api-access-gr98t") pod "5ecaf74d-a8a9-4fd4-91dc-841debd0df4c" (UID: "5ecaf74d-a8a9-4fd4-91dc-841debd0df4c"). InnerVolumeSpecName "kube-api-access-gr98t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.264946 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f70dfc2d-40bb-4259-a731-08503cf4b183-kube-api-access-zv5hf" (OuterVolumeSpecName: "kube-api-access-zv5hf") pod "f70dfc2d-40bb-4259-a731-08503cf4b183" (UID: "f70dfc2d-40bb-4259-a731-08503cf4b183"). InnerVolumeSpecName "kube-api-access-zv5hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.279568 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2ceb787-de8b-4252-981e-818c4ca7c79c-kube-api-access-84dz6" (OuterVolumeSpecName: "kube-api-access-84dz6") pod "d2ceb787-de8b-4252-981e-818c4ca7c79c" (UID: "d2ceb787-de8b-4252-981e-818c4ca7c79c"). InnerVolumeSpecName "kube-api-access-84dz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.342844 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sqf6r" event={"ID":"f70dfc2d-40bb-4259-a731-08503cf4b183","Type":"ContainerDied","Data":"a408b56650d3731e9bb37a05f626b61d24a6a4182946130f2b7ef75a312ff6ec"} Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.343083 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a408b56650d3731e9bb37a05f626b61d24a6a4182946130f2b7ef75a312ff6ec" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.342909 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sqf6r" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.345859 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1cd2-account-create-update-8wkl6" event={"ID":"34e423be-52ff-4474-af1b-472639d2b618","Type":"ContainerDied","Data":"2273fa53b3419ae25e7b64c4ce02478cffd41eea525578e6b6f3e1e1b4e7ec6d"} Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.345972 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2273fa53b3419ae25e7b64c4ce02478cffd41eea525578e6b6f3e1e1b4e7ec6d" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.346105 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1cd2-account-create-update-8wkl6" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.351790 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df616b59-77cf-4694-aa6a-042b8eb13a39-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.351822 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv5hf\" (UniqueName: \"kubernetes.io/projected/f70dfc2d-40bb-4259-a731-08503cf4b183-kube-api-access-zv5hf\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.351835 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f70dfc2d-40bb-4259-a731-08503cf4b183-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.351844 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj6gq\" (UniqueName: \"kubernetes.io/projected/df616b59-77cf-4694-aa6a-042b8eb13a39-kube-api-access-kj6gq\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.351855 4825 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/df616b59-77cf-4694-aa6a-042b8eb13a39-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.351866 4825 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/df616b59-77cf-4694-aa6a-042b8eb13a39-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.351878 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34e423be-52ff-4474-af1b-472639d2b618-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:23 
crc kubenswrapper[4825]: I0122 15:43:23.351888 4825 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df616b59-77cf-4694-aa6a-042b8eb13a39-var-run\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.351899 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84dz6\" (UniqueName: \"kubernetes.io/projected/d2ceb787-de8b-4252-981e-818c4ca7c79c-kube-api-access-84dz6\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.351910 4825 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/df616b59-77cf-4694-aa6a-042b8eb13a39-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.351921 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2ceb787-de8b-4252-981e-818c4ca7c79c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.351931 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj58h\" (UniqueName: \"kubernetes.io/projected/34e423be-52ff-4474-af1b-472639d2b618-kube-api-access-qj58h\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.351942 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr98t\" (UniqueName: \"kubernetes.io/projected/5ecaf74d-a8a9-4fd4-91dc-841debd0df4c-kube-api-access-gr98t\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.351953 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ecaf74d-a8a9-4fd4-91dc-841debd0df4c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.352577 4825 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ovn-controller-snszk-config-m84jn" event={"ID":"df616b59-77cf-4694-aa6a-042b8eb13a39","Type":"ContainerDied","Data":"b7c4638b54ba31e8698d521ea15a28a8e27b4860fbbcab9052f20a3ab4f1f341"} Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.352617 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7c4638b54ba31e8698d521ea15a28a8e27b4860fbbcab9052f20a3ab4f1f341" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.352674 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-snszk-config-m84jn" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.362884 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8142-account-create-update-znkfn" event={"ID":"d2ceb787-de8b-4252-981e-818c4ca7c79c","Type":"ContainerDied","Data":"3a27df17915a3a78c0fa8959d720a0508a916c03f16bf801de385b629f5f7912"} Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.362921 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a27df17915a3a78c0fa8959d720a0508a916c03f16bf801de385b629f5f7912" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.363157 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8142-account-create-update-znkfn" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.368264 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cl5vd" event={"ID":"5ecaf74d-a8a9-4fd4-91dc-841debd0df4c","Type":"ContainerDied","Data":"4495adf890a17e66f1afcc39011aca78753ea9dbabf5cc46f350d671b3c86ab8"} Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.368306 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4495adf890a17e66f1afcc39011aca78753ea9dbabf5cc46f350d671b3c86ab8" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.368379 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cl5vd" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.373720 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-zq89m" event={"ID":"08d465c3-ddd8-4a39-8b52-6df888237aa0","Type":"ContainerDied","Data":"b649bed9cd4a6077ec067c5a2a1a00184e8c5e9f8ff8a6dfe2edb21ae7a7739c"} Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.373758 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b649bed9cd4a6077ec067c5a2a1a00184e8c5e9f8ff8a6dfe2edb21ae7a7739c" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.373843 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-zq89m" Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.379215 4825 generic.go:334] "Generic (PLEG): container finished" podID="8ed16250-e013-4590-a99d-55576235c7d9" containerID="bbacf0dcb7628cfd229bb4a5d96334ecd2b31115d165a1936ee4d7f8d6aba649" exitCode=0 Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.379259 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" event={"ID":"8ed16250-e013-4590-a99d-55576235c7d9","Type":"ContainerDied","Data":"bbacf0dcb7628cfd229bb4a5d96334ecd2b31115d165a1936ee4d7f8d6aba649"} Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.687263 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:23 crc kubenswrapper[4825]: E0122 15:43:23.687597 4825 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 15:43:23 crc kubenswrapper[4825]: E0122 15:43:23.687640 4825 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 15:43:23 crc kubenswrapper[4825]: E0122 15:43:23.687705 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift podName:62f00afd-c39a-409f-ba5e-b5474959717b nodeName:}" failed. No retries permitted until 2026-01-22 15:43:39.68768086 +0000 UTC m=+1166.449207770 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift") pod "swift-storage-0" (UID: "62f00afd-c39a-409f-ba5e-b5474959717b") : configmap "swift-ring-files" not found Jan 22 15:43:23 crc kubenswrapper[4825]: I0122 15:43:23.928391 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" podUID="8ed16250-e013-4590-a99d-55576235c7d9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.233295 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-snszk-config-m84jn"] Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.243344 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-snszk-config-m84jn"] Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.306315 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-snszk-config-vxxmw"] Jan 22 15:43:24 crc kubenswrapper[4825]: E0122 15:43:24.306713 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df616b59-77cf-4694-aa6a-042b8eb13a39" containerName="ovn-config" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.306730 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="df616b59-77cf-4694-aa6a-042b8eb13a39" containerName="ovn-config" Jan 22 15:43:24 crc kubenswrapper[4825]: E0122 15:43:24.306744 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37522712-d1ed-4a4d-ae99-c8fa95502dc1" containerName="mariadb-account-create-update" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.306750 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="37522712-d1ed-4a4d-ae99-c8fa95502dc1" containerName="mariadb-account-create-update" Jan 22 15:43:24 crc kubenswrapper[4825]: E0122 15:43:24.306758 4825 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5ecaf74d-a8a9-4fd4-91dc-841debd0df4c" containerName="mariadb-database-create" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.306764 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ecaf74d-a8a9-4fd4-91dc-841debd0df4c" containerName="mariadb-database-create" Jan 22 15:43:24 crc kubenswrapper[4825]: E0122 15:43:24.306777 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b017023-1c9c-4ff7-9f21-8370aa38cc26" containerName="mariadb-database-create" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.306783 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b017023-1c9c-4ff7-9f21-8370aa38cc26" containerName="mariadb-database-create" Jan 22 15:43:24 crc kubenswrapper[4825]: E0122 15:43:24.306795 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d465c3-ddd8-4a39-8b52-6df888237aa0" containerName="mariadb-database-create" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.306800 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d465c3-ddd8-4a39-8b52-6df888237aa0" containerName="mariadb-database-create" Jan 22 15:43:24 crc kubenswrapper[4825]: E0122 15:43:24.306815 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ceb787-de8b-4252-981e-818c4ca7c79c" containerName="mariadb-account-create-update" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.306820 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ceb787-de8b-4252-981e-818c4ca7c79c" containerName="mariadb-account-create-update" Jan 22 15:43:24 crc kubenswrapper[4825]: E0122 15:43:24.306832 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f70dfc2d-40bb-4259-a731-08503cf4b183" containerName="mariadb-account-create-update" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.306840 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f70dfc2d-40bb-4259-a731-08503cf4b183" containerName="mariadb-account-create-update" Jan 22 15:43:24 
crc kubenswrapper[4825]: E0122 15:43:24.306848 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e423be-52ff-4474-af1b-472639d2b618" containerName="mariadb-account-create-update" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.306854 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e423be-52ff-4474-af1b-472639d2b618" containerName="mariadb-account-create-update" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.307071 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="34e423be-52ff-4474-af1b-472639d2b618" containerName="mariadb-account-create-update" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.307088 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="df616b59-77cf-4694-aa6a-042b8eb13a39" containerName="ovn-config" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.307105 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ecaf74d-a8a9-4fd4-91dc-841debd0df4c" containerName="mariadb-database-create" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.307125 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="37522712-d1ed-4a4d-ae99-c8fa95502dc1" containerName="mariadb-account-create-update" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.307143 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b017023-1c9c-4ff7-9f21-8370aa38cc26" containerName="mariadb-database-create" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.307157 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2ceb787-de8b-4252-981e-818c4ca7c79c" containerName="mariadb-account-create-update" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.307174 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f70dfc2d-40bb-4259-a731-08503cf4b183" containerName="mariadb-account-create-update" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.307184 4825 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="08d465c3-ddd8-4a39-8b52-6df888237aa0" containerName="mariadb-database-create" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.307831 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.313530 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.326134 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-snszk-config-vxxmw"] Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.399341 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f9aa5eef-e111-480d-af3c-fd1ce8872a02-var-run-ovn\") pod \"ovn-controller-snszk-config-vxxmw\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.399508 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9aa5eef-e111-480d-af3c-fd1ce8872a02-scripts\") pod \"ovn-controller-snszk-config-vxxmw\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.399545 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f9aa5eef-e111-480d-af3c-fd1ce8872a02-var-run\") pod \"ovn-controller-snszk-config-vxxmw\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.399597 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f9aa5eef-e111-480d-af3c-fd1ce8872a02-additional-scripts\") pod \"ovn-controller-snszk-config-vxxmw\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.399624 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxqlf\" (UniqueName: \"kubernetes.io/projected/f9aa5eef-e111-480d-af3c-fd1ce8872a02-kube-api-access-gxqlf\") pod \"ovn-controller-snszk-config-vxxmw\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.399655 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f9aa5eef-e111-480d-af3c-fd1ce8872a02-var-log-ovn\") pod \"ovn-controller-snszk-config-vxxmw\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.501298 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f9aa5eef-e111-480d-af3c-fd1ce8872a02-var-run-ovn\") pod \"ovn-controller-snszk-config-vxxmw\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.501455 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9aa5eef-e111-480d-af3c-fd1ce8872a02-scripts\") pod \"ovn-controller-snszk-config-vxxmw\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:24 crc kubenswrapper[4825]: 
I0122 15:43:24.501484 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f9aa5eef-e111-480d-af3c-fd1ce8872a02-var-run\") pod \"ovn-controller-snszk-config-vxxmw\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.501536 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f9aa5eef-e111-480d-af3c-fd1ce8872a02-additional-scripts\") pod \"ovn-controller-snszk-config-vxxmw\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.501569 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxqlf\" (UniqueName: \"kubernetes.io/projected/f9aa5eef-e111-480d-af3c-fd1ce8872a02-kube-api-access-gxqlf\") pod \"ovn-controller-snszk-config-vxxmw\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.501604 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f9aa5eef-e111-480d-af3c-fd1ce8872a02-var-log-ovn\") pod \"ovn-controller-snszk-config-vxxmw\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.502043 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f9aa5eef-e111-480d-af3c-fd1ce8872a02-var-log-ovn\") pod \"ovn-controller-snszk-config-vxxmw\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 
15:43:24.502140 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f9aa5eef-e111-480d-af3c-fd1ce8872a02-var-run-ovn\") pod \"ovn-controller-snszk-config-vxxmw\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.505200 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9aa5eef-e111-480d-af3c-fd1ce8872a02-scripts\") pod \"ovn-controller-snszk-config-vxxmw\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.505301 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f9aa5eef-e111-480d-af3c-fd1ce8872a02-var-run\") pod \"ovn-controller-snszk-config-vxxmw\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.505912 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f9aa5eef-e111-480d-af3c-fd1ce8872a02-additional-scripts\") pod \"ovn-controller-snszk-config-vxxmw\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.538444 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxqlf\" (UniqueName: \"kubernetes.io/projected/f9aa5eef-e111-480d-af3c-fd1ce8872a02-kube-api-access-gxqlf\") pod \"ovn-controller-snszk-config-vxxmw\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:24 crc kubenswrapper[4825]: I0122 15:43:24.665551 4825 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:25 crc kubenswrapper[4825]: I0122 15:43:25.531181 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df616b59-77cf-4694-aa6a-042b8eb13a39" path="/var/lib/kubelet/pods/df616b59-77cf-4694-aa6a-042b8eb13a39/volumes" Jan 22 15:43:25 crc kubenswrapper[4825]: I0122 15:43:25.554512 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" Jan 22 15:43:25 crc kubenswrapper[4825]: I0122 15:43:25.636019 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-snszk-config-vxxmw"] Jan 22 15:43:25 crc kubenswrapper[4825]: W0122 15:43:25.666541 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9aa5eef_e111_480d_af3c_fd1ce8872a02.slice/crio-6b899583ad9d65e7ae7ca1037875cc2bc1209228dc66b5e0bbabccb60721010f WatchSource:0}: Error finding container 6b899583ad9d65e7ae7ca1037875cc2bc1209228dc66b5e0bbabccb60721010f: Status 404 returned error can't find the container with id 6b899583ad9d65e7ae7ca1037875cc2bc1209228dc66b5e0bbabccb60721010f Jan 22 15:43:25 crc kubenswrapper[4825]: I0122 15:43:25.691859 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc5nz\" (UniqueName: \"kubernetes.io/projected/8ed16250-e013-4590-a99d-55576235c7d9-kube-api-access-pc5nz\") pod \"8ed16250-e013-4590-a99d-55576235c7d9\" (UID: \"8ed16250-e013-4590-a99d-55576235c7d9\") " Jan 22 15:43:25 crc kubenswrapper[4825]: I0122 15:43:25.692116 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ed16250-e013-4590-a99d-55576235c7d9-dns-svc\") pod \"8ed16250-e013-4590-a99d-55576235c7d9\" (UID: \"8ed16250-e013-4590-a99d-55576235c7d9\") " Jan 22 15:43:25 crc kubenswrapper[4825]: I0122 
15:43:25.692149 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed16250-e013-4590-a99d-55576235c7d9-config\") pod \"8ed16250-e013-4590-a99d-55576235c7d9\" (UID: \"8ed16250-e013-4590-a99d-55576235c7d9\") " Jan 22 15:43:25 crc kubenswrapper[4825]: I0122 15:43:25.701011 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ed16250-e013-4590-a99d-55576235c7d9-kube-api-access-pc5nz" (OuterVolumeSpecName: "kube-api-access-pc5nz") pod "8ed16250-e013-4590-a99d-55576235c7d9" (UID: "8ed16250-e013-4590-a99d-55576235c7d9"). InnerVolumeSpecName "kube-api-access-pc5nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:25 crc kubenswrapper[4825]: I0122 15:43:25.770050 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ed16250-e013-4590-a99d-55576235c7d9-config" (OuterVolumeSpecName: "config") pod "8ed16250-e013-4590-a99d-55576235c7d9" (UID: "8ed16250-e013-4590-a99d-55576235c7d9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:25 crc kubenswrapper[4825]: I0122 15:43:25.773061 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ed16250-e013-4590-a99d-55576235c7d9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ed16250-e013-4590-a99d-55576235c7d9" (UID: "8ed16250-e013-4590-a99d-55576235c7d9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:25 crc kubenswrapper[4825]: I0122 15:43:25.798784 4825 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ed16250-e013-4590-a99d-55576235c7d9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:25 crc kubenswrapper[4825]: I0122 15:43:25.798813 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed16250-e013-4590-a99d-55576235c7d9-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:25 crc kubenswrapper[4825]: I0122 15:43:25.798823 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc5nz\" (UniqueName: \"kubernetes.io/projected/8ed16250-e013-4590-a99d-55576235c7d9-kube-api-access-pc5nz\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:26 crc kubenswrapper[4825]: I0122 15:43:26.351265 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-sqf6r"] Jan 22 15:43:26 crc kubenswrapper[4825]: I0122 15:43:26.359029 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-sqf6r"] Jan 22 15:43:26 crc kubenswrapper[4825]: I0122 15:43:26.409012 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" event={"ID":"8ed16250-e013-4590-a99d-55576235c7d9","Type":"ContainerDied","Data":"bd32b52879303d81d820ae2f4f99dfa88486ae27d60f07ee8b22f894b1f93c13"} Jan 22 15:43:26 crc kubenswrapper[4825]: I0122 15:43:26.409076 4825 scope.go:117] "RemoveContainer" containerID="bbacf0dcb7628cfd229bb4a5d96334ecd2b31115d165a1936ee4d7f8d6aba649" Jan 22 15:43:26 crc kubenswrapper[4825]: I0122 15:43:26.409259 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dfcq6" Jan 22 15:43:26 crc kubenswrapper[4825]: I0122 15:43:26.412746 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96e4c8b8-1127-4f5a-9fe7-51f8b2478388","Type":"ContainerStarted","Data":"10764b40f8f1de47f6ef2730e3a37d2f2353dc3474ddc2e67965b22003273634"} Jan 22 15:43:26 crc kubenswrapper[4825]: I0122 15:43:26.417743 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-snszk-config-vxxmw" event={"ID":"f9aa5eef-e111-480d-af3c-fd1ce8872a02","Type":"ContainerStarted","Data":"0e4518939dfa38f9e7b7dcf8cc8c2f5d2fb4b6f1d9599c0b587eb94d42c0f21d"} Jan 22 15:43:26 crc kubenswrapper[4825]: I0122 15:43:26.417784 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-snszk-config-vxxmw" event={"ID":"f9aa5eef-e111-480d-af3c-fd1ce8872a02","Type":"ContainerStarted","Data":"6b899583ad9d65e7ae7ca1037875cc2bc1209228dc66b5e0bbabccb60721010f"} Jan 22 15:43:26 crc kubenswrapper[4825]: I0122 15:43:26.421917 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-z6dh6" event={"ID":"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2","Type":"ContainerStarted","Data":"af2ef60351ba57d283e16b63c9a5973a736b0cd3ff72175973d2bf2b859820b4"} Jan 22 15:43:26 crc kubenswrapper[4825]: I0122 15:43:26.442740 4825 scope.go:117] "RemoveContainer" containerID="b7e8ea50d17dc63739334d1f801e466a472ce0d595f3d26cdceb47621861f86e" Jan 22 15:43:26 crc kubenswrapper[4825]: I0122 15:43:26.457338 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-snszk-config-vxxmw" podStartSLOduration=2.457316536 podStartE2EDuration="2.457316536s" podCreationTimestamp="2026-01-22 15:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:43:26.43681324 +0000 UTC 
m=+1153.198340170" watchObservedRunningTime="2026-01-22 15:43:26.457316536 +0000 UTC m=+1153.218843446" Jan 22 15:43:26 crc kubenswrapper[4825]: I0122 15:43:26.472532 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dfcq6"] Jan 22 15:43:26 crc kubenswrapper[4825]: I0122 15:43:26.483862 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dfcq6"] Jan 22 15:43:26 crc kubenswrapper[4825]: I0122 15:43:26.490460 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-z6dh6" podStartSLOduration=8.509494686 podStartE2EDuration="16.490434583s" podCreationTimestamp="2026-01-22 15:43:10 +0000 UTC" firstStartedPulling="2026-01-22 15:43:16.872907109 +0000 UTC m=+1143.634434019" lastFinishedPulling="2026-01-22 15:43:24.853847006 +0000 UTC m=+1151.615373916" observedRunningTime="2026-01-22 15:43:26.474808836 +0000 UTC m=+1153.236335746" watchObservedRunningTime="2026-01-22 15:43:26.490434583 +0000 UTC m=+1153.251961493" Jan 22 15:43:27 crc kubenswrapper[4825]: I0122 15:43:27.434048 4825 generic.go:334] "Generic (PLEG): container finished" podID="f9aa5eef-e111-480d-af3c-fd1ce8872a02" containerID="0e4518939dfa38f9e7b7dcf8cc8c2f5d2fb4b6f1d9599c0b587eb94d42c0f21d" exitCode=0 Jan 22 15:43:27 crc kubenswrapper[4825]: I0122 15:43:27.434133 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-snszk-config-vxxmw" event={"ID":"f9aa5eef-e111-480d-af3c-fd1ce8872a02","Type":"ContainerDied","Data":"0e4518939dfa38f9e7b7dcf8cc8c2f5d2fb4b6f1d9599c0b587eb94d42c0f21d"} Jan 22 15:43:27 crc kubenswrapper[4825]: I0122 15:43:27.538995 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ed16250-e013-4590-a99d-55576235c7d9" path="/var/lib/kubelet/pods/8ed16250-e013-4590-a99d-55576235c7d9/volumes" Jan 22 15:43:27 crc kubenswrapper[4825]: I0122 15:43:27.539584 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="f70dfc2d-40bb-4259-a731-08503cf4b183" path="/var/lib/kubelet/pods/f70dfc2d-40bb-4259-a731-08503cf4b183/volumes" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.695926 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-r6p27"] Jan 22 15:43:28 crc kubenswrapper[4825]: E0122 15:43:28.696763 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed16250-e013-4590-a99d-55576235c7d9" containerName="dnsmasq-dns" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.696784 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed16250-e013-4590-a99d-55576235c7d9" containerName="dnsmasq-dns" Jan 22 15:43:28 crc kubenswrapper[4825]: E0122 15:43:28.696801 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed16250-e013-4590-a99d-55576235c7d9" containerName="init" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.696809 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed16250-e013-4590-a99d-55576235c7d9" containerName="init" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.697129 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ed16250-e013-4590-a99d-55576235c7d9" containerName="dnsmasq-dns" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.697916 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-r6p27" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.702530 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qhjbr" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.702892 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.712480 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-r6p27"] Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.740265 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df7p8\" (UniqueName: \"kubernetes.io/projected/ffcfdefe-f831-469c-9423-6cd4399435a7-kube-api-access-df7p8\") pod \"glance-db-sync-r6p27\" (UID: \"ffcfdefe-f831-469c-9423-6cd4399435a7\") " pod="openstack/glance-db-sync-r6p27" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.740313 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffcfdefe-f831-469c-9423-6cd4399435a7-config-data\") pod \"glance-db-sync-r6p27\" (UID: \"ffcfdefe-f831-469c-9423-6cd4399435a7\") " pod="openstack/glance-db-sync-r6p27" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.740542 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ffcfdefe-f831-469c-9423-6cd4399435a7-db-sync-config-data\") pod \"glance-db-sync-r6p27\" (UID: \"ffcfdefe-f831-469c-9423-6cd4399435a7\") " pod="openstack/glance-db-sync-r6p27" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.740810 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ffcfdefe-f831-469c-9423-6cd4399435a7-combined-ca-bundle\") pod \"glance-db-sync-r6p27\" (UID: \"ffcfdefe-f831-469c-9423-6cd4399435a7\") " pod="openstack/glance-db-sync-r6p27" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.842313 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df7p8\" (UniqueName: \"kubernetes.io/projected/ffcfdefe-f831-469c-9423-6cd4399435a7-kube-api-access-df7p8\") pod \"glance-db-sync-r6p27\" (UID: \"ffcfdefe-f831-469c-9423-6cd4399435a7\") " pod="openstack/glance-db-sync-r6p27" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.842610 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffcfdefe-f831-469c-9423-6cd4399435a7-config-data\") pod \"glance-db-sync-r6p27\" (UID: \"ffcfdefe-f831-469c-9423-6cd4399435a7\") " pod="openstack/glance-db-sync-r6p27" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.842688 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ffcfdefe-f831-469c-9423-6cd4399435a7-db-sync-config-data\") pod \"glance-db-sync-r6p27\" (UID: \"ffcfdefe-f831-469c-9423-6cd4399435a7\") " pod="openstack/glance-db-sync-r6p27" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.842729 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffcfdefe-f831-469c-9423-6cd4399435a7-combined-ca-bundle\") pod \"glance-db-sync-r6p27\" (UID: \"ffcfdefe-f831-469c-9423-6cd4399435a7\") " pod="openstack/glance-db-sync-r6p27" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.849247 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffcfdefe-f831-469c-9423-6cd4399435a7-combined-ca-bundle\") pod \"glance-db-sync-r6p27\" (UID: 
\"ffcfdefe-f831-469c-9423-6cd4399435a7\") " pod="openstack/glance-db-sync-r6p27" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.852571 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffcfdefe-f831-469c-9423-6cd4399435a7-config-data\") pod \"glance-db-sync-r6p27\" (UID: \"ffcfdefe-f831-469c-9423-6cd4399435a7\") " pod="openstack/glance-db-sync-r6p27" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.854748 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ffcfdefe-f831-469c-9423-6cd4399435a7-db-sync-config-data\") pod \"glance-db-sync-r6p27\" (UID: \"ffcfdefe-f831-469c-9423-6cd4399435a7\") " pod="openstack/glance-db-sync-r6p27" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.868881 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df7p8\" (UniqueName: \"kubernetes.io/projected/ffcfdefe-f831-469c-9423-6cd4399435a7-kube-api-access-df7p8\") pod \"glance-db-sync-r6p27\" (UID: \"ffcfdefe-f831-469c-9423-6cd4399435a7\") " pod="openstack/glance-db-sync-r6p27" Jan 22 15:43:28 crc kubenswrapper[4825]: I0122 15:43:28.898699 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="57cba631-503b-4795-8463-3d1e50957d58" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.030387 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-r6p27" Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.288160 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-snszk-config-vxxmw" Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.356758 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f9aa5eef-e111-480d-af3c-fd1ce8872a02-var-log-ovn\") pod \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.357321 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f9aa5eef-e111-480d-af3c-fd1ce8872a02-var-run-ovn\") pod \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.357392 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxqlf\" (UniqueName: \"kubernetes.io/projected/f9aa5eef-e111-480d-af3c-fd1ce8872a02-kube-api-access-gxqlf\") pod \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.357426 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f9aa5eef-e111-480d-af3c-fd1ce8872a02-var-run\") pod \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.357542 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9aa5eef-e111-480d-af3c-fd1ce8872a02-scripts\") pod \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.357594 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" 
(UniqueName: \"kubernetes.io/configmap/f9aa5eef-e111-480d-af3c-fd1ce8872a02-additional-scripts\") pod \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\" (UID: \"f9aa5eef-e111-480d-af3c-fd1ce8872a02\") " Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.357131 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9aa5eef-e111-480d-af3c-fd1ce8872a02-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f9aa5eef-e111-480d-af3c-fd1ce8872a02" (UID: "f9aa5eef-e111-480d-af3c-fd1ce8872a02"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.358805 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9aa5eef-e111-480d-af3c-fd1ce8872a02-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f9aa5eef-e111-480d-af3c-fd1ce8872a02" (UID: "f9aa5eef-e111-480d-af3c-fd1ce8872a02"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.358852 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9aa5eef-e111-480d-af3c-fd1ce8872a02-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f9aa5eef-e111-480d-af3c-fd1ce8872a02" (UID: "f9aa5eef-e111-480d-af3c-fd1ce8872a02"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.359523 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9aa5eef-e111-480d-af3c-fd1ce8872a02-var-run" (OuterVolumeSpecName: "var-run") pod "f9aa5eef-e111-480d-af3c-fd1ce8872a02" (UID: "f9aa5eef-e111-480d-af3c-fd1ce8872a02"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.360362 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9aa5eef-e111-480d-af3c-fd1ce8872a02-scripts" (OuterVolumeSpecName: "scripts") pod "f9aa5eef-e111-480d-af3c-fd1ce8872a02" (UID: "f9aa5eef-e111-480d-af3c-fd1ce8872a02"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.389257 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9aa5eef-e111-480d-af3c-fd1ce8872a02-kube-api-access-gxqlf" (OuterVolumeSpecName: "kube-api-access-gxqlf") pod "f9aa5eef-e111-480d-af3c-fd1ce8872a02" (UID: "f9aa5eef-e111-480d-af3c-fd1ce8872a02"). InnerVolumeSpecName "kube-api-access-gxqlf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.462359 4825 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f9aa5eef-e111-480d-af3c-fd1ce8872a02-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.462390 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxqlf\" (UniqueName: \"kubernetes.io/projected/f9aa5eef-e111-480d-af3c-fd1ce8872a02-kube-api-access-gxqlf\") on node \"crc\" DevicePath \"\""
Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.462404 4825 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f9aa5eef-e111-480d-af3c-fd1ce8872a02-var-run\") on node \"crc\" DevicePath \"\""
Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.462412 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9aa5eef-e111-480d-af3c-fd1ce8872a02-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.462420 4825 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f9aa5eef-e111-480d-af3c-fd1ce8872a02-additional-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.462442 4825 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f9aa5eef-e111-480d-af3c-fd1ce8872a02-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.478463 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-snszk-config-vxxmw" event={"ID":"f9aa5eef-e111-480d-af3c-fd1ce8872a02","Type":"ContainerDied","Data":"6b899583ad9d65e7ae7ca1037875cc2bc1209228dc66b5e0bbabccb60721010f"}
Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.478512 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-snszk-config-vxxmw"
Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.478517 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b899583ad9d65e7ae7ca1037875cc2bc1209228dc66b5e0bbabccb60721010f"
Jan 22 15:43:29 crc kubenswrapper[4825]: I0122 15:43:29.799122 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-r6p27"]
Jan 22 15:43:29 crc kubenswrapper[4825]: W0122 15:43:29.803748 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffcfdefe_f831_469c_9423_6cd4399435a7.slice/crio-77f238a8fe20ec7597c8bba005d39cce5b7a78a3fd00a2f47d9b373d339737a1 WatchSource:0}: Error finding container 77f238a8fe20ec7597c8bba005d39cce5b7a78a3fd00a2f47d9b373d339737a1: Status 404 returned error can't find the container with id 77f238a8fe20ec7597c8bba005d39cce5b7a78a3fd00a2f47d9b373d339737a1
Jan 22 15:43:30 crc kubenswrapper[4825]: I0122 15:43:30.005147 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 22 15:43:30 crc kubenswrapper[4825]: I0122 15:43:30.531658 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-snszk-config-vxxmw"]
Jan 22 15:43:30 crc kubenswrapper[4825]: I0122 15:43:30.567211 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96e4c8b8-1127-4f5a-9fe7-51f8b2478388","Type":"ContainerStarted","Data":"c68cc29d0d009622139b8ae7a5f0fbd380d5eff3052dc3f968eafa7e040a968d"}
Jan 22 15:43:30 crc kubenswrapper[4825]: I0122 15:43:30.571801 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-snszk-config-vxxmw"]
Jan 22 15:43:30 crc kubenswrapper[4825]: I0122 15:43:30.580110 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r6p27" event={"ID":"ffcfdefe-f831-469c-9423-6cd4399435a7","Type":"ContainerStarted","Data":"77f238a8fe20ec7597c8bba005d39cce5b7a78a3fd00a2f47d9b373d339737a1"}
Jan 22 15:43:30 crc kubenswrapper[4825]: I0122 15:43:30.862383 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=35.842745726 podStartE2EDuration="1m35.861965502s" podCreationTimestamp="2026-01-22 15:41:55 +0000 UTC" firstStartedPulling="2026-01-22 15:42:29.352116203 +0000 UTC m=+1096.113643113" lastFinishedPulling="2026-01-22 15:43:29.371335979 +0000 UTC m=+1156.132862889" observedRunningTime="2026-01-22 15:43:30.640087578 +0000 UTC m=+1157.401614488" watchObservedRunningTime="2026-01-22 15:43:30.861965502 +0000 UTC m=+1157.623492412"
Jan 22 15:43:30 crc kubenswrapper[4825]: I0122 15:43:30.874852 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-7xb69"]
Jan 22 15:43:30 crc kubenswrapper[4825]: E0122 15:43:30.877406 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9aa5eef-e111-480d-af3c-fd1ce8872a02" containerName="ovn-config"
Jan 22 15:43:30 crc kubenswrapper[4825]: I0122 15:43:30.879335 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9aa5eef-e111-480d-af3c-fd1ce8872a02" containerName="ovn-config"
Jan 22 15:43:30 crc kubenswrapper[4825]: I0122 15:43:30.879650 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9aa5eef-e111-480d-af3c-fd1ce8872a02" containerName="ovn-config"
Jan 22 15:43:30 crc kubenswrapper[4825]: I0122 15:43:30.884174 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7xb69"
Jan 22 15:43:30 crc kubenswrapper[4825]: I0122 15:43:30.891673 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7xb69"]
Jan 22 15:43:30 crc kubenswrapper[4825]: I0122 15:43:30.952485 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5c47-account-create-update-qrfz5"]
Jan 22 15:43:30 crc kubenswrapper[4825]: I0122 15:43:30.957357 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5c47-account-create-update-qrfz5"
Jan 22 15:43:30 crc kubenswrapper[4825]: I0122 15:43:30.982687 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 22 15:43:30 crc kubenswrapper[4825]: I0122 15:43:30.995525 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc9bn\" (UniqueName: \"kubernetes.io/projected/50a6798c-df21-4a31-a652-836868719f0e-kube-api-access-fc9bn\") pod \"cinder-db-create-7xb69\" (UID: \"50a6798c-df21-4a31-a652-836868719f0e\") " pod="openstack/cinder-db-create-7xb69"
Jan 22 15:43:30 crc kubenswrapper[4825]: I0122 15:43:30.995667 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50a6798c-df21-4a31-a652-836868719f0e-operator-scripts\") pod \"cinder-db-create-7xb69\" (UID: \"50a6798c-df21-4a31-a652-836868719f0e\") " pod="openstack/cinder-db-create-7xb69"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.065206 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5c47-account-create-update-qrfz5"]
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.100061 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-cqc55"]
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.101393 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1de7e5d-05c7-4b56-9896-41aece1133fe-operator-scripts\") pod \"cinder-5c47-account-create-update-qrfz5\" (UID: \"a1de7e5d-05c7-4b56-9896-41aece1133fe\") " pod="openstack/cinder-5c47-account-create-update-qrfz5"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.101497 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkm22\" (UniqueName: \"kubernetes.io/projected/a1de7e5d-05c7-4b56-9896-41aece1133fe-kube-api-access-rkm22\") pod \"cinder-5c47-account-create-update-qrfz5\" (UID: \"a1de7e5d-05c7-4b56-9896-41aece1133fe\") " pod="openstack/cinder-5c47-account-create-update-qrfz5"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.101582 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50a6798c-df21-4a31-a652-836868719f0e-operator-scripts\") pod \"cinder-db-create-7xb69\" (UID: \"50a6798c-df21-4a31-a652-836868719f0e\") " pod="openstack/cinder-db-create-7xb69"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.101673 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc9bn\" (UniqueName: \"kubernetes.io/projected/50a6798c-df21-4a31-a652-836868719f0e-kube-api-access-fc9bn\") pod \"cinder-db-create-7xb69\" (UID: \"50a6798c-df21-4a31-a652-836868719f0e\") " pod="openstack/cinder-db-create-7xb69"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.102594 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50a6798c-df21-4a31-a652-836868719f0e-operator-scripts\") pod \"cinder-db-create-7xb69\" (UID: \"50a6798c-df21-4a31-a652-836868719f0e\") " pod="openstack/cinder-db-create-7xb69"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.109739 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cqc55"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.138832 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc9bn\" (UniqueName: \"kubernetes.io/projected/50a6798c-df21-4a31-a652-836868719f0e-kube-api-access-fc9bn\") pod \"cinder-db-create-7xb69\" (UID: \"50a6798c-df21-4a31-a652-836868719f0e\") " pod="openstack/cinder-db-create-7xb69"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.139720 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-cqc55"]
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.203425 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b7bf140-ffaa-4907-a659-0e00718698e0-operator-scripts\") pod \"barbican-db-create-cqc55\" (UID: \"8b7bf140-ffaa-4907-a659-0e00718698e0\") " pod="openstack/barbican-db-create-cqc55"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.203718 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1de7e5d-05c7-4b56-9896-41aece1133fe-operator-scripts\") pod \"cinder-5c47-account-create-update-qrfz5\" (UID: \"a1de7e5d-05c7-4b56-9896-41aece1133fe\") " pod="openstack/cinder-5c47-account-create-update-qrfz5"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.203781 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wncfl\" (UniqueName: \"kubernetes.io/projected/8b7bf140-ffaa-4907-a659-0e00718698e0-kube-api-access-wncfl\") pod \"barbican-db-create-cqc55\" (UID: \"8b7bf140-ffaa-4907-a659-0e00718698e0\") " pod="openstack/barbican-db-create-cqc55"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.203827 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkm22\" (UniqueName: \"kubernetes.io/projected/a1de7e5d-05c7-4b56-9896-41aece1133fe-kube-api-access-rkm22\") pod \"cinder-5c47-account-create-update-qrfz5\" (UID: \"a1de7e5d-05c7-4b56-9896-41aece1133fe\") " pod="openstack/cinder-5c47-account-create-update-qrfz5"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.204647 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1de7e5d-05c7-4b56-9896-41aece1133fe-operator-scripts\") pod \"cinder-5c47-account-create-update-qrfz5\" (UID: \"a1de7e5d-05c7-4b56-9896-41aece1133fe\") " pod="openstack/cinder-5c47-account-create-update-qrfz5"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.226070 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkm22\" (UniqueName: \"kubernetes.io/projected/a1de7e5d-05c7-4b56-9896-41aece1133fe-kube-api-access-rkm22\") pod \"cinder-5c47-account-create-update-qrfz5\" (UID: \"a1de7e5d-05c7-4b56-9896-41aece1133fe\") " pod="openstack/cinder-5c47-account-create-update-qrfz5"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.245298 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7xb69"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.287568 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-qjswp"]
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.288725 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5c47-account-create-update-qrfz5"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.289299 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-qjswp"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.298626 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-qjswp"]
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.305687 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wncfl\" (UniqueName: \"kubernetes.io/projected/8b7bf140-ffaa-4907-a659-0e00718698e0-kube-api-access-wncfl\") pod \"barbican-db-create-cqc55\" (UID: \"8b7bf140-ffaa-4907-a659-0e00718698e0\") " pod="openstack/barbican-db-create-cqc55"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.305766 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b7bf140-ffaa-4907-a659-0e00718698e0-operator-scripts\") pod \"barbican-db-create-cqc55\" (UID: \"8b7bf140-ffaa-4907-a659-0e00718698e0\") " pod="openstack/barbican-db-create-cqc55"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.306520 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b7bf140-ffaa-4907-a659-0e00718698e0-operator-scripts\") pod \"barbican-db-create-cqc55\" (UID: \"8b7bf140-ffaa-4907-a659-0e00718698e0\") " pod="openstack/barbican-db-create-cqc55"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.311059 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5835-account-create-update-ps6lh"]
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.312692 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5835-account-create-update-ps6lh"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.330196 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.351413 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wncfl\" (UniqueName: \"kubernetes.io/projected/8b7bf140-ffaa-4907-a659-0e00718698e0-kube-api-access-wncfl\") pod \"barbican-db-create-cqc55\" (UID: \"8b7bf140-ffaa-4907-a659-0e00718698e0\") " pod="openstack/barbican-db-create-cqc55"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.370863 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5835-account-create-update-ps6lh"]
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.397056 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-2gv7p"]
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.398843 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2gv7p"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.407836 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvwf6\" (UniqueName: \"kubernetes.io/projected/859cf314-a4cb-4952-8445-91d01ee03ca9-kube-api-access-gvwf6\") pod \"cloudkitty-db-create-qjswp\" (UID: \"859cf314-a4cb-4952-8445-91d01ee03ca9\") " pod="openstack/cloudkitty-db-create-qjswp"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.407926 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c69ccb-9024-4ce2-bb24-04640babc65c-operator-scripts\") pod \"barbican-5835-account-create-update-ps6lh\" (UID: \"66c69ccb-9024-4ce2-bb24-04640babc65c\") " pod="openstack/barbican-5835-account-create-update-ps6lh"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.408109 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/859cf314-a4cb-4952-8445-91d01ee03ca9-operator-scripts\") pod \"cloudkitty-db-create-qjswp\" (UID: \"859cf314-a4cb-4952-8445-91d01ee03ca9\") " pod="openstack/cloudkitty-db-create-qjswp"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.408213 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlk6j\" (UniqueName: \"kubernetes.io/projected/66c69ccb-9024-4ce2-bb24-04640babc65c-kube-api-access-qlk6j\") pod \"barbican-5835-account-create-update-ps6lh\" (UID: \"66c69ccb-9024-4ce2-bb24-04640babc65c\") " pod="openstack/barbican-5835-account-create-update-ps6lh"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.411242 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.411567 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.411713 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g9wlz"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.420035 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.432049 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2gv7p"]
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.499829 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-wqxdd"]
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.501410 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wqxdd"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.509757 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/901e6ba7-980d-4fee-acbd-5aa8314aed8e-combined-ca-bundle\") pod \"keystone-db-sync-2gv7p\" (UID: \"901e6ba7-980d-4fee-acbd-5aa8314aed8e\") " pod="openstack/keystone-db-sync-2gv7p"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.509812 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvwf6\" (UniqueName: \"kubernetes.io/projected/859cf314-a4cb-4952-8445-91d01ee03ca9-kube-api-access-gvwf6\") pod \"cloudkitty-db-create-qjswp\" (UID: \"859cf314-a4cb-4952-8445-91d01ee03ca9\") " pod="openstack/cloudkitty-db-create-qjswp"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.509841 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c69ccb-9024-4ce2-bb24-04640babc65c-operator-scripts\") pod \"barbican-5835-account-create-update-ps6lh\" (UID: \"66c69ccb-9024-4ce2-bb24-04640babc65c\") " pod="openstack/barbican-5835-account-create-update-ps6lh"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.509863 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/901e6ba7-980d-4fee-acbd-5aa8314aed8e-config-data\") pod \"keystone-db-sync-2gv7p\" (UID: \"901e6ba7-980d-4fee-acbd-5aa8314aed8e\") " pod="openstack/keystone-db-sync-2gv7p"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.509917 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/859cf314-a4cb-4952-8445-91d01ee03ca9-operator-scripts\") pod \"cloudkitty-db-create-qjswp\" (UID: \"859cf314-a4cb-4952-8445-91d01ee03ca9\") " pod="openstack/cloudkitty-db-create-qjswp"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.509953 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq98r\" (UniqueName: \"kubernetes.io/projected/901e6ba7-980d-4fee-acbd-5aa8314aed8e-kube-api-access-rq98r\") pod \"keystone-db-sync-2gv7p\" (UID: \"901e6ba7-980d-4fee-acbd-5aa8314aed8e\") " pod="openstack/keystone-db-sync-2gv7p"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.509970 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlk6j\" (UniqueName: \"kubernetes.io/projected/66c69ccb-9024-4ce2-bb24-04640babc65c-kube-api-access-qlk6j\") pod \"barbican-5835-account-create-update-ps6lh\" (UID: \"66c69ccb-9024-4ce2-bb24-04640babc65c\") " pod="openstack/barbican-5835-account-create-update-ps6lh"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.513372 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/859cf314-a4cb-4952-8445-91d01ee03ca9-operator-scripts\") pod \"cloudkitty-db-create-qjswp\" (UID: \"859cf314-a4cb-4952-8445-91d01ee03ca9\") " pod="openstack/cloudkitty-db-create-qjswp"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.513415 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cqc55"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.520209 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c69ccb-9024-4ce2-bb24-04640babc65c-operator-scripts\") pod \"barbican-5835-account-create-update-ps6lh\" (UID: \"66c69ccb-9024-4ce2-bb24-04640babc65c\") " pod="openstack/barbican-5835-account-create-update-ps6lh"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.542086 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlk6j\" (UniqueName: \"kubernetes.io/projected/66c69ccb-9024-4ce2-bb24-04640babc65c-kube-api-access-qlk6j\") pod \"barbican-5835-account-create-update-ps6lh\" (UID: \"66c69ccb-9024-4ce2-bb24-04640babc65c\") " pod="openstack/barbican-5835-account-create-update-ps6lh"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.552536 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvwf6\" (UniqueName: \"kubernetes.io/projected/859cf314-a4cb-4952-8445-91d01ee03ca9-kube-api-access-gvwf6\") pod \"cloudkitty-db-create-qjswp\" (UID: \"859cf314-a4cb-4952-8445-91d01ee03ca9\") " pod="openstack/cloudkitty-db-create-qjswp"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.560199 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9aa5eef-e111-480d-af3c-fd1ce8872a02" path="/var/lib/kubelet/pods/f9aa5eef-e111-480d-af3c-fd1ce8872a02/volumes"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.561007 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-wqxdd"]
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.616879 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/901e6ba7-980d-4fee-acbd-5aa8314aed8e-combined-ca-bundle\") pod \"keystone-db-sync-2gv7p\" (UID: \"901e6ba7-980d-4fee-acbd-5aa8314aed8e\") " pod="openstack/keystone-db-sync-2gv7p"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.616994 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/901e6ba7-980d-4fee-acbd-5aa8314aed8e-config-data\") pod \"keystone-db-sync-2gv7p\" (UID: \"901e6ba7-980d-4fee-acbd-5aa8314aed8e\") " pod="openstack/keystone-db-sync-2gv7p"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.617032 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfjx4\" (UniqueName: \"kubernetes.io/projected/5d719b91-8720-4e1c-9be9-a94cf9f3b15c-kube-api-access-jfjx4\") pod \"neutron-db-create-wqxdd\" (UID: \"5d719b91-8720-4e1c-9be9-a94cf9f3b15c\") " pod="openstack/neutron-db-create-wqxdd"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.617154 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq98r\" (UniqueName: \"kubernetes.io/projected/901e6ba7-980d-4fee-acbd-5aa8314aed8e-kube-api-access-rq98r\") pod \"keystone-db-sync-2gv7p\" (UID: \"901e6ba7-980d-4fee-acbd-5aa8314aed8e\") " pod="openstack/keystone-db-sync-2gv7p"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.617273 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d719b91-8720-4e1c-9be9-a94cf9f3b15c-operator-scripts\") pod \"neutron-db-create-wqxdd\" (UID: \"5d719b91-8720-4e1c-9be9-a94cf9f3b15c\") " pod="openstack/neutron-db-create-wqxdd"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.633687 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-5da4-account-create-update-67kjm"]
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.634459 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/901e6ba7-980d-4fee-acbd-5aa8314aed8e-config-data\") pod \"keystone-db-sync-2gv7p\" (UID: \"901e6ba7-980d-4fee-acbd-5aa8314aed8e\") " pod="openstack/keystone-db-sync-2gv7p"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.634598 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/901e6ba7-980d-4fee-acbd-5aa8314aed8e-combined-ca-bundle\") pod \"keystone-db-sync-2gv7p\" (UID: \"901e6ba7-980d-4fee-acbd-5aa8314aed8e\") " pod="openstack/keystone-db-sync-2gv7p"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.653334 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-5da4-account-create-update-67kjm"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.666183 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.670383 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-5da4-account-create-update-67kjm"]
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.672554 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq98r\" (UniqueName: \"kubernetes.io/projected/901e6ba7-980d-4fee-acbd-5aa8314aed8e-kube-api-access-rq98r\") pod \"keystone-db-sync-2gv7p\" (UID: \"901e6ba7-980d-4fee-acbd-5aa8314aed8e\") " pod="openstack/keystone-db-sync-2gv7p"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.713054 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-kdjdn"]
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.714652 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kdjdn"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.719548 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfjx4\" (UniqueName: \"kubernetes.io/projected/5d719b91-8720-4e1c-9be9-a94cf9f3b15c-kube-api-access-jfjx4\") pod \"neutron-db-create-wqxdd\" (UID: \"5d719b91-8720-4e1c-9be9-a94cf9f3b15c\") " pod="openstack/neutron-db-create-wqxdd"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.719658 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9b5aaf3-84d2-47b2-8d21-684f354813f8-operator-scripts\") pod \"cloudkitty-5da4-account-create-update-67kjm\" (UID: \"f9b5aaf3-84d2-47b2-8d21-684f354813f8\") " pod="openstack/cloudkitty-5da4-account-create-update-67kjm"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.720290 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62gf9\" (UniqueName: \"kubernetes.io/projected/f9b5aaf3-84d2-47b2-8d21-684f354813f8-kube-api-access-62gf9\") pod \"cloudkitty-5da4-account-create-update-67kjm\" (UID: \"f9b5aaf3-84d2-47b2-8d21-684f354813f8\") " pod="openstack/cloudkitty-5da4-account-create-update-67kjm"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.723129 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.725789 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d719b91-8720-4e1c-9be9-a94cf9f3b15c-operator-scripts\") pod \"neutron-db-create-wqxdd\" (UID: \"5d719b91-8720-4e1c-9be9-a94cf9f3b15c\") " pod="openstack/neutron-db-create-wqxdd"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.726794 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d719b91-8720-4e1c-9be9-a94cf9f3b15c-operator-scripts\") pod \"neutron-db-create-wqxdd\" (UID: \"5d719b91-8720-4e1c-9be9-a94cf9f3b15c\") " pod="openstack/neutron-db-create-wqxdd"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.745551 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kdjdn"]
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.748172 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-qjswp"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.778260 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5835-account-create-update-ps6lh"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.780655 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfjx4\" (UniqueName: \"kubernetes.io/projected/5d719b91-8720-4e1c-9be9-a94cf9f3b15c-kube-api-access-jfjx4\") pod \"neutron-db-create-wqxdd\" (UID: \"5d719b91-8720-4e1c-9be9-a94cf9f3b15c\") " pod="openstack/neutron-db-create-wqxdd"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.794404 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2gv7p"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.824056 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-36b5-account-create-update-bm8dq"]
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.825783 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-36b5-account-create-update-bm8dq"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.828099 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9zh9\" (UniqueName: \"kubernetes.io/projected/eea3602a-5333-49e3-ba9f-041bdd79218f-kube-api-access-v9zh9\") pod \"root-account-create-update-kdjdn\" (UID: \"eea3602a-5333-49e3-ba9f-041bdd79218f\") " pod="openstack/root-account-create-update-kdjdn"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.828197 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9b5aaf3-84d2-47b2-8d21-684f354813f8-operator-scripts\") pod \"cloudkitty-5da4-account-create-update-67kjm\" (UID: \"f9b5aaf3-84d2-47b2-8d21-684f354813f8\") " pod="openstack/cloudkitty-5da4-account-create-update-67kjm"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.828245 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62gf9\" (UniqueName: \"kubernetes.io/projected/f9b5aaf3-84d2-47b2-8d21-684f354813f8-kube-api-access-62gf9\") pod \"cloudkitty-5da4-account-create-update-67kjm\" (UID: \"f9b5aaf3-84d2-47b2-8d21-684f354813f8\") " pod="openstack/cloudkitty-5da4-account-create-update-67kjm"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.828264 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eea3602a-5333-49e3-ba9f-041bdd79218f-operator-scripts\") pod \"root-account-create-update-kdjdn\" (UID: \"eea3602a-5333-49e3-ba9f-041bdd79218f\") " pod="openstack/root-account-create-update-kdjdn"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.829315 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9b5aaf3-84d2-47b2-8d21-684f354813f8-operator-scripts\") pod \"cloudkitty-5da4-account-create-update-67kjm\" (UID: \"f9b5aaf3-84d2-47b2-8d21-684f354813f8\") " pod="openstack/cloudkitty-5da4-account-create-update-67kjm"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.834554 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.835191 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wqxdd"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.844795 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-36b5-account-create-update-bm8dq"]
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.880186 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62gf9\" (UniqueName: \"kubernetes.io/projected/f9b5aaf3-84d2-47b2-8d21-684f354813f8-kube-api-access-62gf9\") pod \"cloudkitty-5da4-account-create-update-67kjm\" (UID: \"f9b5aaf3-84d2-47b2-8d21-684f354813f8\") " pod="openstack/cloudkitty-5da4-account-create-update-67kjm"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.930474 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eea3602a-5333-49e3-ba9f-041bdd79218f-operator-scripts\") pod \"root-account-create-update-kdjdn\" (UID: \"eea3602a-5333-49e3-ba9f-041bdd79218f\") " pod="openstack/root-account-create-update-kdjdn"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.930567 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfs6t\" (UniqueName: \"kubernetes.io/projected/933073b1-0e88-4f40-9c8d-12050b5ccc0a-kube-api-access-jfs6t\") pod \"neutron-36b5-account-create-update-bm8dq\" (UID: \"933073b1-0e88-4f40-9c8d-12050b5ccc0a\") " pod="openstack/neutron-36b5-account-create-update-bm8dq"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.930680 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/933073b1-0e88-4f40-9c8d-12050b5ccc0a-operator-scripts\") pod \"neutron-36b5-account-create-update-bm8dq\" (UID: \"933073b1-0e88-4f40-9c8d-12050b5ccc0a\") " pod="openstack/neutron-36b5-account-create-update-bm8dq"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.930779 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9zh9\" (UniqueName: \"kubernetes.io/projected/eea3602a-5333-49e3-ba9f-041bdd79218f-kube-api-access-v9zh9\") pod \"root-account-create-update-kdjdn\" (UID: \"eea3602a-5333-49e3-ba9f-041bdd79218f\") " pod="openstack/root-account-create-update-kdjdn"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.931292 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eea3602a-5333-49e3-ba9f-041bdd79218f-operator-scripts\") pod \"root-account-create-update-kdjdn\" (UID: \"eea3602a-5333-49e3-ba9f-041bdd79218f\") " pod="openstack/root-account-create-update-kdjdn"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.950471 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9zh9\" (UniqueName: \"kubernetes.io/projected/eea3602a-5333-49e3-ba9f-041bdd79218f-kube-api-access-v9zh9\") pod \"root-account-create-update-kdjdn\" (UID: \"eea3602a-5333-49e3-ba9f-041bdd79218f\") " pod="openstack/root-account-create-update-kdjdn"
Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.989164 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-5da4-account-create-update-67kjm" Jan 22 15:43:31 crc kubenswrapper[4825]: I0122 15:43:31.993783 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:32 crc kubenswrapper[4825]: I0122 15:43:32.033240 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfs6t\" (UniqueName: \"kubernetes.io/projected/933073b1-0e88-4f40-9c8d-12050b5ccc0a-kube-api-access-jfs6t\") pod \"neutron-36b5-account-create-update-bm8dq\" (UID: \"933073b1-0e88-4f40-9c8d-12050b5ccc0a\") " pod="openstack/neutron-36b5-account-create-update-bm8dq" Jan 22 15:43:32 crc kubenswrapper[4825]: I0122 15:43:32.033340 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/933073b1-0e88-4f40-9c8d-12050b5ccc0a-operator-scripts\") pod \"neutron-36b5-account-create-update-bm8dq\" (UID: \"933073b1-0e88-4f40-9c8d-12050b5ccc0a\") " pod="openstack/neutron-36b5-account-create-update-bm8dq" Jan 22 15:43:32 crc kubenswrapper[4825]: I0122 15:43:32.034502 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/933073b1-0e88-4f40-9c8d-12050b5ccc0a-operator-scripts\") pod \"neutron-36b5-account-create-update-bm8dq\" (UID: \"933073b1-0e88-4f40-9c8d-12050b5ccc0a\") " pod="openstack/neutron-36b5-account-create-update-bm8dq" Jan 22 15:43:32 crc kubenswrapper[4825]: I0122 15:43:32.049848 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kdjdn" Jan 22 15:43:32 crc kubenswrapper[4825]: I0122 15:43:32.064581 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfs6t\" (UniqueName: \"kubernetes.io/projected/933073b1-0e88-4f40-9c8d-12050b5ccc0a-kube-api-access-jfs6t\") pod \"neutron-36b5-account-create-update-bm8dq\" (UID: \"933073b1-0e88-4f40-9c8d-12050b5ccc0a\") " pod="openstack/neutron-36b5-account-create-update-bm8dq" Jan 22 15:43:32 crc kubenswrapper[4825]: I0122 15:43:32.164509 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-36b5-account-create-update-bm8dq" Jan 22 15:43:32 crc kubenswrapper[4825]: W0122 15:43:32.164498 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50a6798c_df21_4a31_a652_836868719f0e.slice/crio-14bb68c5bb9f18580166bed5f1ad2996057571cab26f2162b71a81542bc887fc WatchSource:0}: Error finding container 14bb68c5bb9f18580166bed5f1ad2996057571cab26f2162b71a81542bc887fc: Status 404 returned error can't find the container with id 14bb68c5bb9f18580166bed5f1ad2996057571cab26f2162b71a81542bc887fc Jan 22 15:43:32 crc kubenswrapper[4825]: I0122 15:43:32.194450 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7xb69"] Jan 22 15:43:32 crc kubenswrapper[4825]: I0122 15:43:32.520020 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5c47-account-create-update-qrfz5"] Jan 22 15:43:32 crc kubenswrapper[4825]: I0122 15:43:32.539498 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-cqc55"] Jan 22 15:43:32 crc kubenswrapper[4825]: I0122 15:43:32.671500 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cqc55" 
event={"ID":"8b7bf140-ffaa-4907-a659-0e00718698e0","Type":"ContainerStarted","Data":"0c7f4e61b465954b4bc0ed6db813204f413c69eed14d51742e3bfac0f62163ac"} Jan 22 15:43:32 crc kubenswrapper[4825]: I0122 15:43:32.675161 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7xb69" event={"ID":"50a6798c-df21-4a31-a652-836868719f0e","Type":"ContainerStarted","Data":"14bb68c5bb9f18580166bed5f1ad2996057571cab26f2162b71a81542bc887fc"} Jan 22 15:43:32 crc kubenswrapper[4825]: I0122 15:43:32.679313 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5c47-account-create-update-qrfz5" event={"ID":"a1de7e5d-05c7-4b56-9896-41aece1133fe","Type":"ContainerStarted","Data":"6628d05da5675a46790299959ff6ee74223158153f0f8e1bf96613c0f293e363"} Jan 22 15:43:32 crc kubenswrapper[4825]: I0122 15:43:32.931662 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-wqxdd"] Jan 22 15:43:32 crc kubenswrapper[4825]: I0122 15:43:32.971292 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2gv7p"] Jan 22 15:43:32 crc kubenswrapper[4825]: I0122 15:43:32.978770 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5835-account-create-update-ps6lh"] Jan 22 15:43:32 crc kubenswrapper[4825]: I0122 15:43:32.985519 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-qjswp"] Jan 22 15:43:33 crc kubenswrapper[4825]: I0122 15:43:33.055203 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-5da4-account-create-update-67kjm"] Jan 22 15:43:33 crc kubenswrapper[4825]: I0122 15:43:33.117337 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kdjdn"] Jan 22 15:43:33 crc kubenswrapper[4825]: I0122 15:43:33.253243 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-36b5-account-create-update-bm8dq"] Jan 22 15:43:33 crc 
kubenswrapper[4825]: I0122 15:43:33.720073 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-qjswp" event={"ID":"859cf314-a4cb-4952-8445-91d01ee03ca9","Type":"ContainerStarted","Data":"61d047e891b3c6f29d81bb5ba7fce44363cdd08cf969a55400a43bebd170c34f"} Jan 22 15:43:33 crc kubenswrapper[4825]: I0122 15:43:33.739623 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5c47-account-create-update-qrfz5" event={"ID":"a1de7e5d-05c7-4b56-9896-41aece1133fe","Type":"ContainerStarted","Data":"5394e21bb385486cb59d0d6353285a35001bf791a5d3bc46ed811fb2f4a65735"} Jan 22 15:43:33 crc kubenswrapper[4825]: I0122 15:43:33.747362 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-5da4-account-create-update-67kjm" event={"ID":"f9b5aaf3-84d2-47b2-8d21-684f354813f8","Type":"ContainerStarted","Data":"52ac224ae430723ff554f9dbc85a70975d56e451fe5ceccb6b84db2babfc1c54"} Jan 22 15:43:33 crc kubenswrapper[4825]: I0122 15:43:33.772859 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-36b5-account-create-update-bm8dq" event={"ID":"933073b1-0e88-4f40-9c8d-12050b5ccc0a","Type":"ContainerStarted","Data":"101f9f98b18c76b7b37eaf266b4f8c81a35db2a3dedf393a7e93e8444b1846b7"} Jan 22 15:43:33 crc kubenswrapper[4825]: I0122 15:43:33.784084 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2gv7p" event={"ID":"901e6ba7-980d-4fee-acbd-5aa8314aed8e","Type":"ContainerStarted","Data":"d936de390b97127a8cdce73f800d54ce9edfcfa18df49b2b5899d907652ee8d0"} Jan 22 15:43:33 crc kubenswrapper[4825]: I0122 15:43:33.791844 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5835-account-create-update-ps6lh" event={"ID":"66c69ccb-9024-4ce2-bb24-04640babc65c","Type":"ContainerStarted","Data":"bba517b483085bf7940c4b2e59666f44da8336030dbb00a806005963def83d54"} Jan 22 15:43:33 crc kubenswrapper[4825]: I0122 15:43:33.826076 4825 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kdjdn" event={"ID":"eea3602a-5333-49e3-ba9f-041bdd79218f","Type":"ContainerStarted","Data":"f2456148ce355282d19d38b2da8d928d687f63582574f6f3a90d132a0fabf357"} Jan 22 15:43:33 crc kubenswrapper[4825]: I0122 15:43:33.826334 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cqc55" event={"ID":"8b7bf140-ffaa-4907-a659-0e00718698e0","Type":"ContainerStarted","Data":"4c1f0153b65bc5175c312c4cd719e4fe55c8b1161aecc54033ef59e240dc7ac3"} Jan 22 15:43:33 crc kubenswrapper[4825]: I0122 15:43:33.849784 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wqxdd" event={"ID":"5d719b91-8720-4e1c-9be9-a94cf9f3b15c","Type":"ContainerStarted","Data":"c7e6649f64e637fce0f75a6b9e896c0997d641a5d1ed7c892bda4b969dc8fb10"} Jan 22 15:43:33 crc kubenswrapper[4825]: I0122 15:43:33.849836 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wqxdd" event={"ID":"5d719b91-8720-4e1c-9be9-a94cf9f3b15c","Type":"ContainerStarted","Data":"3fdc4eea446c3e9d0f3c7d71064ff83dd08ed2d78b3f90c12dd834b8928e3017"} Jan 22 15:43:33 crc kubenswrapper[4825]: I0122 15:43:33.863750 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-5da4-account-create-update-67kjm" podStartSLOduration=2.863724455 podStartE2EDuration="2.863724455s" podCreationTimestamp="2026-01-22 15:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:43:33.78522677 +0000 UTC m=+1160.546753680" watchObservedRunningTime="2026-01-22 15:43:33.863724455 +0000 UTC m=+1160.625251365" Jan 22 15:43:33 crc kubenswrapper[4825]: I0122 15:43:33.874686 4825 generic.go:334] "Generic (PLEG): container finished" podID="50a6798c-df21-4a31-a652-836868719f0e" containerID="a6f65147ae431a301e01665f4fa6719ac9c36404487cc4fbfc7dce8a36239a79" 
exitCode=0 Jan 22 15:43:33 crc kubenswrapper[4825]: I0122 15:43:33.874732 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7xb69" event={"ID":"50a6798c-df21-4a31-a652-836868719f0e","Type":"ContainerDied","Data":"a6f65147ae431a301e01665f4fa6719ac9c36404487cc4fbfc7dce8a36239a79"} Jan 22 15:43:33 crc kubenswrapper[4825]: I0122 15:43:33.889336 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-36b5-account-create-update-bm8dq" podStartSLOduration=2.889310827 podStartE2EDuration="2.889310827s" podCreationTimestamp="2026-01-22 15:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:43:33.798115609 +0000 UTC m=+1160.559642519" watchObservedRunningTime="2026-01-22 15:43:33.889310827 +0000 UTC m=+1160.650837737" Jan 22 15:43:33 crc kubenswrapper[4825]: I0122 15:43:33.908039 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-5835-account-create-update-ps6lh" podStartSLOduration=2.908012791 podStartE2EDuration="2.908012791s" podCreationTimestamp="2026-01-22 15:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:43:33.817302787 +0000 UTC m=+1160.578829707" watchObservedRunningTime="2026-01-22 15:43:33.908012791 +0000 UTC m=+1160.669539701" Jan 22 15:43:33 crc kubenswrapper[4825]: I0122 15:43:33.925900 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-kdjdn" podStartSLOduration=2.925875362 podStartE2EDuration="2.925875362s" podCreationTimestamp="2026-01-22 15:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:43:33.846493612 +0000 UTC m=+1160.608020522" 
watchObservedRunningTime="2026-01-22 15:43:33.925875362 +0000 UTC m=+1160.687402272" Jan 22 15:43:33 crc kubenswrapper[4825]: I0122 15:43:33.962314 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-wqxdd" podStartSLOduration=2.962292053 podStartE2EDuration="2.962292053s" podCreationTimestamp="2026-01-22 15:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:43:33.921529928 +0000 UTC m=+1160.683056848" watchObservedRunningTime="2026-01-22 15:43:33.962292053 +0000 UTC m=+1160.723818963" Jan 22 15:43:34 crc kubenswrapper[4825]: E0122 15:43:34.515140 4825 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod933073b1_0e88_4f40_9c8d_12050b5ccc0a.slice/crio-conmon-dc93ad5f17a6a9e656efe285f923d56a228e07122a965de1d95fcd0c5d62206c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66c69ccb_9024_4ce2_bb24_04640babc65c.slice/crio-conmon-29ece37c41db9dbe8d417e698cd0ba7d05e1cf94cfc76b185ee87012c12104e5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod933073b1_0e88_4f40_9c8d_12050b5ccc0a.slice/crio-dc93ad5f17a6a9e656efe285f923d56a228e07122a965de1d95fcd0c5d62206c.scope\": RecentStats: unable to find data in memory cache]" Jan 22 15:43:34 crc kubenswrapper[4825]: I0122 15:43:34.889533 4825 generic.go:334] "Generic (PLEG): container finished" podID="5d719b91-8720-4e1c-9be9-a94cf9f3b15c" containerID="c7e6649f64e637fce0f75a6b9e896c0997d641a5d1ed7c892bda4b969dc8fb10" exitCode=0 Jan 22 15:43:34 crc kubenswrapper[4825]: I0122 15:43:34.889876 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wqxdd" 
event={"ID":"5d719b91-8720-4e1c-9be9-a94cf9f3b15c","Type":"ContainerDied","Data":"c7e6649f64e637fce0f75a6b9e896c0997d641a5d1ed7c892bda4b969dc8fb10"} Jan 22 15:43:34 crc kubenswrapper[4825]: I0122 15:43:34.893832 4825 generic.go:334] "Generic (PLEG): container finished" podID="f9b5aaf3-84d2-47b2-8d21-684f354813f8" containerID="e48b4f16f89f942f656c4a1757c78d7c8be3f12f2171c66bdf76485e428d5837" exitCode=0 Jan 22 15:43:34 crc kubenswrapper[4825]: I0122 15:43:34.894079 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-5da4-account-create-update-67kjm" event={"ID":"f9b5aaf3-84d2-47b2-8d21-684f354813f8","Type":"ContainerDied","Data":"e48b4f16f89f942f656c4a1757c78d7c8be3f12f2171c66bdf76485e428d5837"} Jan 22 15:43:34 crc kubenswrapper[4825]: I0122 15:43:34.897891 4825 generic.go:334] "Generic (PLEG): container finished" podID="933073b1-0e88-4f40-9c8d-12050b5ccc0a" containerID="dc93ad5f17a6a9e656efe285f923d56a228e07122a965de1d95fcd0c5d62206c" exitCode=0 Jan 22 15:43:34 crc kubenswrapper[4825]: I0122 15:43:34.897999 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-36b5-account-create-update-bm8dq" event={"ID":"933073b1-0e88-4f40-9c8d-12050b5ccc0a","Type":"ContainerDied","Data":"dc93ad5f17a6a9e656efe285f923d56a228e07122a965de1d95fcd0c5d62206c"} Jan 22 15:43:34 crc kubenswrapper[4825]: I0122 15:43:34.901440 4825 generic.go:334] "Generic (PLEG): container finished" podID="859cf314-a4cb-4952-8445-91d01ee03ca9" containerID="b8658e534f63c9619e756afd71f9871aad8589a5a2c630e250da84bb07b23c31" exitCode=0 Jan 22 15:43:34 crc kubenswrapper[4825]: I0122 15:43:34.901523 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-qjswp" event={"ID":"859cf314-a4cb-4952-8445-91d01ee03ca9","Type":"ContainerDied","Data":"b8658e534f63c9619e756afd71f9871aad8589a5a2c630e250da84bb07b23c31"} Jan 22 15:43:34 crc kubenswrapper[4825]: I0122 15:43:34.908808 4825 generic.go:334] "Generic (PLEG): container finished" 
podID="66c69ccb-9024-4ce2-bb24-04640babc65c" containerID="29ece37c41db9dbe8d417e698cd0ba7d05e1cf94cfc76b185ee87012c12104e5" exitCode=0 Jan 22 15:43:34 crc kubenswrapper[4825]: I0122 15:43:34.908887 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5835-account-create-update-ps6lh" event={"ID":"66c69ccb-9024-4ce2-bb24-04640babc65c","Type":"ContainerDied","Data":"29ece37c41db9dbe8d417e698cd0ba7d05e1cf94cfc76b185ee87012c12104e5"} Jan 22 15:43:34 crc kubenswrapper[4825]: I0122 15:43:34.910760 4825 generic.go:334] "Generic (PLEG): container finished" podID="a1de7e5d-05c7-4b56-9896-41aece1133fe" containerID="5394e21bb385486cb59d0d6353285a35001bf791a5d3bc46ed811fb2f4a65735" exitCode=0 Jan 22 15:43:34 crc kubenswrapper[4825]: I0122 15:43:34.910806 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5c47-account-create-update-qrfz5" event={"ID":"a1de7e5d-05c7-4b56-9896-41aece1133fe","Type":"ContainerDied","Data":"5394e21bb385486cb59d0d6353285a35001bf791a5d3bc46ed811fb2f4a65735"} Jan 22 15:43:34 crc kubenswrapper[4825]: I0122 15:43:34.912769 4825 generic.go:334] "Generic (PLEG): container finished" podID="eea3602a-5333-49e3-ba9f-041bdd79218f" containerID="7b6ca5ed145bea05108f82644894e5ea2d1ddde556c8bf3c9b95f4fa7bb5305b" exitCode=0 Jan 22 15:43:34 crc kubenswrapper[4825]: I0122 15:43:34.912836 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kdjdn" event={"ID":"eea3602a-5333-49e3-ba9f-041bdd79218f","Type":"ContainerDied","Data":"7b6ca5ed145bea05108f82644894e5ea2d1ddde556c8bf3c9b95f4fa7bb5305b"} Jan 22 15:43:34 crc kubenswrapper[4825]: I0122 15:43:34.914947 4825 generic.go:334] "Generic (PLEG): container finished" podID="8b7bf140-ffaa-4907-a659-0e00718698e0" containerID="4c1f0153b65bc5175c312c4cd719e4fe55c8b1161aecc54033ef59e240dc7ac3" exitCode=0 Jan 22 15:43:34 crc kubenswrapper[4825]: I0122 15:43:34.915081 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-create-cqc55" event={"ID":"8b7bf140-ffaa-4907-a659-0e00718698e0","Type":"ContainerDied","Data":"4c1f0153b65bc5175c312c4cd719e4fe55c8b1161aecc54033ef59e240dc7ac3"} Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.543970 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.544031 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.637465 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cqc55" Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.647635 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7xb69" Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.665528 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5c47-account-create-update-qrfz5" Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.749503 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1de7e5d-05c7-4b56-9896-41aece1133fe-operator-scripts\") pod \"a1de7e5d-05c7-4b56-9896-41aece1133fe\" (UID: \"a1de7e5d-05c7-4b56-9896-41aece1133fe\") " Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.749593 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wncfl\" (UniqueName: \"kubernetes.io/projected/8b7bf140-ffaa-4907-a659-0e00718698e0-kube-api-access-wncfl\") pod \"8b7bf140-ffaa-4907-a659-0e00718698e0\" (UID: \"8b7bf140-ffaa-4907-a659-0e00718698e0\") " Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.749688 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkm22\" (UniqueName: \"kubernetes.io/projected/a1de7e5d-05c7-4b56-9896-41aece1133fe-kube-api-access-rkm22\") pod \"a1de7e5d-05c7-4b56-9896-41aece1133fe\" (UID: \"a1de7e5d-05c7-4b56-9896-41aece1133fe\") " Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.749708 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc9bn\" (UniqueName: \"kubernetes.io/projected/50a6798c-df21-4a31-a652-836868719f0e-kube-api-access-fc9bn\") pod \"50a6798c-df21-4a31-a652-836868719f0e\" (UID: \"50a6798c-df21-4a31-a652-836868719f0e\") " Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.749724 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b7bf140-ffaa-4907-a659-0e00718698e0-operator-scripts\") pod \"8b7bf140-ffaa-4907-a659-0e00718698e0\" (UID: \"8b7bf140-ffaa-4907-a659-0e00718698e0\") " Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.749754 4825 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50a6798c-df21-4a31-a652-836868719f0e-operator-scripts\") pod \"50a6798c-df21-4a31-a652-836868719f0e\" (UID: \"50a6798c-df21-4a31-a652-836868719f0e\") " Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.750588 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1de7e5d-05c7-4b56-9896-41aece1133fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a1de7e5d-05c7-4b56-9896-41aece1133fe" (UID: "a1de7e5d-05c7-4b56-9896-41aece1133fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.750804 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50a6798c-df21-4a31-a652-836868719f0e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "50a6798c-df21-4a31-a652-836868719f0e" (UID: "50a6798c-df21-4a31-a652-836868719f0e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.751362 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b7bf140-ffaa-4907-a659-0e00718698e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b7bf140-ffaa-4907-a659-0e00718698e0" (UID: "8b7bf140-ffaa-4907-a659-0e00718698e0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.756544 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50a6798c-df21-4a31-a652-836868719f0e-kube-api-access-fc9bn" (OuterVolumeSpecName: "kube-api-access-fc9bn") pod "50a6798c-df21-4a31-a652-836868719f0e" (UID: "50a6798c-df21-4a31-a652-836868719f0e"). 
InnerVolumeSpecName "kube-api-access-fc9bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.757132 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b7bf140-ffaa-4907-a659-0e00718698e0-kube-api-access-wncfl" (OuterVolumeSpecName: "kube-api-access-wncfl") pod "8b7bf140-ffaa-4907-a659-0e00718698e0" (UID: "8b7bf140-ffaa-4907-a659-0e00718698e0"). InnerVolumeSpecName "kube-api-access-wncfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.771419 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1de7e5d-05c7-4b56-9896-41aece1133fe-kube-api-access-rkm22" (OuterVolumeSpecName: "kube-api-access-rkm22") pod "a1de7e5d-05c7-4b56-9896-41aece1133fe" (UID: "a1de7e5d-05c7-4b56-9896-41aece1133fe"). InnerVolumeSpecName "kube-api-access-rkm22". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.873471 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1de7e5d-05c7-4b56-9896-41aece1133fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.873521 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wncfl\" (UniqueName: \"kubernetes.io/projected/8b7bf140-ffaa-4907-a659-0e00718698e0-kube-api-access-wncfl\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.873537 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkm22\" (UniqueName: \"kubernetes.io/projected/a1de7e5d-05c7-4b56-9896-41aece1133fe-kube-api-access-rkm22\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.873548 4825 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-fc9bn\" (UniqueName: \"kubernetes.io/projected/50a6798c-df21-4a31-a652-836868719f0e-kube-api-access-fc9bn\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.873560 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b7bf140-ffaa-4907-a659-0e00718698e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.873572 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50a6798c-df21-4a31-a652-836868719f0e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.963758 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5c47-account-create-update-qrfz5" event={"ID":"a1de7e5d-05c7-4b56-9896-41aece1133fe","Type":"ContainerDied","Data":"6628d05da5675a46790299959ff6ee74223158153f0f8e1bf96613c0f293e363"} Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.963803 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6628d05da5675a46790299959ff6ee74223158153f0f8e1bf96613c0f293e363" Jan 22 15:43:35 crc kubenswrapper[4825]: I0122 15:43:35.963913 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5c47-account-create-update-qrfz5" Jan 22 15:43:36 crc kubenswrapper[4825]: I0122 15:43:36.007340 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cqc55" event={"ID":"8b7bf140-ffaa-4907-a659-0e00718698e0","Type":"ContainerDied","Data":"0c7f4e61b465954b4bc0ed6db813204f413c69eed14d51742e3bfac0f62163ac"} Jan 22 15:43:36 crc kubenswrapper[4825]: I0122 15:43:36.007382 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c7f4e61b465954b4bc0ed6db813204f413c69eed14d51742e3bfac0f62163ac" Jan 22 15:43:36 crc kubenswrapper[4825]: I0122 15:43:36.007485 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cqc55" Jan 22 15:43:36 crc kubenswrapper[4825]: I0122 15:43:36.029816 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7xb69" event={"ID":"50a6798c-df21-4a31-a652-836868719f0e","Type":"ContainerDied","Data":"14bb68c5bb9f18580166bed5f1ad2996057571cab26f2162b71a81542bc887fc"} Jan 22 15:43:36 crc kubenswrapper[4825]: I0122 15:43:36.029855 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14bb68c5bb9f18580166bed5f1ad2996057571cab26f2162b71a81542bc887fc" Jan 22 15:43:36 crc kubenswrapper[4825]: I0122 15:43:36.030119 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7xb69" Jan 22 15:43:38 crc kubenswrapper[4825]: I0122 15:43:38.052170 4825 generic.go:334] "Generic (PLEG): container finished" podID="4cc9ba42-f6cd-48ac-b240-d2d764abe4a2" containerID="af2ef60351ba57d283e16b63c9a5973a736b0cd3ff72175973d2bf2b859820b4" exitCode=0 Jan 22 15:43:38 crc kubenswrapper[4825]: I0122 15:43:38.052270 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-z6dh6" event={"ID":"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2","Type":"ContainerDied","Data":"af2ef60351ba57d283e16b63c9a5973a736b0cd3ff72175973d2bf2b859820b4"} Jan 22 15:43:38 crc kubenswrapper[4825]: I0122 15:43:38.993379 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 22 15:43:39 crc kubenswrapper[4825]: I0122 15:43:39.703918 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:39 crc kubenswrapper[4825]: I0122 15:43:39.716283 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62f00afd-c39a-409f-ba5e-b5474959717b-etc-swift\") pod \"swift-storage-0\" (UID: \"62f00afd-c39a-409f-ba5e-b5474959717b\") " pod="openstack/swift-storage-0" Jan 22 15:43:39 crc kubenswrapper[4825]: I0122 15:43:39.763206 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.327954 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5835-account-create-update-ps6lh" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.331175 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-36b5-account-create-update-bm8dq" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.340789 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kdjdn" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.357920 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-qjswp" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.374684 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-5da4-account-create-update-67kjm" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.396523 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wqxdd" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.413282 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.419496 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/933073b1-0e88-4f40-9c8d-12050b5ccc0a-operator-scripts\") pod \"933073b1-0e88-4f40-9c8d-12050b5ccc0a\" (UID: \"933073b1-0e88-4f40-9c8d-12050b5ccc0a\") " Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.419938 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfs6t\" (UniqueName: \"kubernetes.io/projected/933073b1-0e88-4f40-9c8d-12050b5ccc0a-kube-api-access-jfs6t\") pod \"933073b1-0e88-4f40-9c8d-12050b5ccc0a\" (UID: \"933073b1-0e88-4f40-9c8d-12050b5ccc0a\") " Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.420116 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c69ccb-9024-4ce2-bb24-04640babc65c-operator-scripts\") pod \"66c69ccb-9024-4ce2-bb24-04640babc65c\" (UID: \"66c69ccb-9024-4ce2-bb24-04640babc65c\") " Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.420194 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/933073b1-0e88-4f40-9c8d-12050b5ccc0a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "933073b1-0e88-4f40-9c8d-12050b5ccc0a" (UID: "933073b1-0e88-4f40-9c8d-12050b5ccc0a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.420822 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66c69ccb-9024-4ce2-bb24-04640babc65c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66c69ccb-9024-4ce2-bb24-04640babc65c" (UID: "66c69ccb-9024-4ce2-bb24-04640babc65c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.420971 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlk6j\" (UniqueName: \"kubernetes.io/projected/66c69ccb-9024-4ce2-bb24-04640babc65c-kube-api-access-qlk6j\") pod \"66c69ccb-9024-4ce2-bb24-04640babc65c\" (UID: \"66c69ccb-9024-4ce2-bb24-04640babc65c\") " Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.422060 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/933073b1-0e88-4f40-9c8d-12050b5ccc0a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.422187 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c69ccb-9024-4ce2-bb24-04640babc65c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.427280 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66c69ccb-9024-4ce2-bb24-04640babc65c-kube-api-access-qlk6j" (OuterVolumeSpecName: "kube-api-access-qlk6j") pod "66c69ccb-9024-4ce2-bb24-04640babc65c" (UID: "66c69ccb-9024-4ce2-bb24-04640babc65c"). InnerVolumeSpecName "kube-api-access-qlk6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.427526 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/933073b1-0e88-4f40-9c8d-12050b5ccc0a-kube-api-access-jfs6t" (OuterVolumeSpecName: "kube-api-access-jfs6t") pod "933073b1-0e88-4f40-9c8d-12050b5ccc0a" (UID: "933073b1-0e88-4f40-9c8d-12050b5ccc0a"). InnerVolumeSpecName "kube-api-access-jfs6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.523123 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spv4q\" (UniqueName: \"kubernetes.io/projected/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-kube-api-access-spv4q\") pod \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.523180 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-combined-ca-bundle\") pod \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.523248 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62gf9\" (UniqueName: \"kubernetes.io/projected/f9b5aaf3-84d2-47b2-8d21-684f354813f8-kube-api-access-62gf9\") pod \"f9b5aaf3-84d2-47b2-8d21-684f354813f8\" (UID: \"f9b5aaf3-84d2-47b2-8d21-684f354813f8\") " Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.523349 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9zh9\" (UniqueName: \"kubernetes.io/projected/eea3602a-5333-49e3-ba9f-041bdd79218f-kube-api-access-v9zh9\") pod \"eea3602a-5333-49e3-ba9f-041bdd79218f\" (UID: \"eea3602a-5333-49e3-ba9f-041bdd79218f\") " Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.523378 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-etc-swift\") pod \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.523408 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-ring-data-devices\") pod \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.523437 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9b5aaf3-84d2-47b2-8d21-684f354813f8-operator-scripts\") pod \"f9b5aaf3-84d2-47b2-8d21-684f354813f8\" (UID: \"f9b5aaf3-84d2-47b2-8d21-684f354813f8\") " Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.523494 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d719b91-8720-4e1c-9be9-a94cf9f3b15c-operator-scripts\") pod \"5d719b91-8720-4e1c-9be9-a94cf9f3b15c\" (UID: \"5d719b91-8720-4e1c-9be9-a94cf9f3b15c\") " Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.523532 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-swiftconf\") pod \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.523576 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-scripts\") pod \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.523639 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvwf6\" (UniqueName: \"kubernetes.io/projected/859cf314-a4cb-4952-8445-91d01ee03ca9-kube-api-access-gvwf6\") pod \"859cf314-a4cb-4952-8445-91d01ee03ca9\" (UID: \"859cf314-a4cb-4952-8445-91d01ee03ca9\") " Jan 22 
15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.523699 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-dispersionconf\") pod \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\" (UID: \"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2\") " Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.523722 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eea3602a-5333-49e3-ba9f-041bdd79218f-operator-scripts\") pod \"eea3602a-5333-49e3-ba9f-041bdd79218f\" (UID: \"eea3602a-5333-49e3-ba9f-041bdd79218f\") " Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.523773 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfjx4\" (UniqueName: \"kubernetes.io/projected/5d719b91-8720-4e1c-9be9-a94cf9f3b15c-kube-api-access-jfjx4\") pod \"5d719b91-8720-4e1c-9be9-a94cf9f3b15c\" (UID: \"5d719b91-8720-4e1c-9be9-a94cf9f3b15c\") " Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.523798 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/859cf314-a4cb-4952-8445-91d01ee03ca9-operator-scripts\") pod \"859cf314-a4cb-4952-8445-91d01ee03ca9\" (UID: \"859cf314-a4cb-4952-8445-91d01ee03ca9\") " Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.524288 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfs6t\" (UniqueName: \"kubernetes.io/projected/933073b1-0e88-4f40-9c8d-12050b5ccc0a-kube-api-access-jfs6t\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.524307 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlk6j\" (UniqueName: \"kubernetes.io/projected/66c69ccb-9024-4ce2-bb24-04640babc65c-kube-api-access-qlk6j\") on node \"crc\" DevicePath \"\"" Jan 22 
15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.525413 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859cf314-a4cb-4952-8445-91d01ee03ca9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "859cf314-a4cb-4952-8445-91d01ee03ca9" (UID: "859cf314-a4cb-4952-8445-91d01ee03ca9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.526399 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-kube-api-access-spv4q" (OuterVolumeSpecName: "kube-api-access-spv4q") pod "4cc9ba42-f6cd-48ac-b240-d2d764abe4a2" (UID: "4cc9ba42-f6cd-48ac-b240-d2d764abe4a2"). InnerVolumeSpecName "kube-api-access-spv4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.526586 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4cc9ba42-f6cd-48ac-b240-d2d764abe4a2" (UID: "4cc9ba42-f6cd-48ac-b240-d2d764abe4a2"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.527698 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4cc9ba42-f6cd-48ac-b240-d2d764abe4a2" (UID: "4cc9ba42-f6cd-48ac-b240-d2d764abe4a2"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.528111 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9b5aaf3-84d2-47b2-8d21-684f354813f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9b5aaf3-84d2-47b2-8d21-684f354813f8" (UID: "f9b5aaf3-84d2-47b2-8d21-684f354813f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.529791 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d719b91-8720-4e1c-9be9-a94cf9f3b15c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d719b91-8720-4e1c-9be9-a94cf9f3b15c" (UID: "5d719b91-8720-4e1c-9be9-a94cf9f3b15c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.530424 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea3602a-5333-49e3-ba9f-041bdd79218f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eea3602a-5333-49e3-ba9f-041bdd79218f" (UID: "eea3602a-5333-49e3-ba9f-041bdd79218f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.530914 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea3602a-5333-49e3-ba9f-041bdd79218f-kube-api-access-v9zh9" (OuterVolumeSpecName: "kube-api-access-v9zh9") pod "eea3602a-5333-49e3-ba9f-041bdd79218f" (UID: "eea3602a-5333-49e3-ba9f-041bdd79218f"). InnerVolumeSpecName "kube-api-access-v9zh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.531327 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/859cf314-a4cb-4952-8445-91d01ee03ca9-kube-api-access-gvwf6" (OuterVolumeSpecName: "kube-api-access-gvwf6") pod "859cf314-a4cb-4952-8445-91d01ee03ca9" (UID: "859cf314-a4cb-4952-8445-91d01ee03ca9"). InnerVolumeSpecName "kube-api-access-gvwf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.534680 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9b5aaf3-84d2-47b2-8d21-684f354813f8-kube-api-access-62gf9" (OuterVolumeSpecName: "kube-api-access-62gf9") pod "f9b5aaf3-84d2-47b2-8d21-684f354813f8" (UID: "f9b5aaf3-84d2-47b2-8d21-684f354813f8"). InnerVolumeSpecName "kube-api-access-62gf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.535422 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d719b91-8720-4e1c-9be9-a94cf9f3b15c-kube-api-access-jfjx4" (OuterVolumeSpecName: "kube-api-access-jfjx4") pod "5d719b91-8720-4e1c-9be9-a94cf9f3b15c" (UID: "5d719b91-8720-4e1c-9be9-a94cf9f3b15c"). InnerVolumeSpecName "kube-api-access-jfjx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.537473 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4cc9ba42-f6cd-48ac-b240-d2d764abe4a2" (UID: "4cc9ba42-f6cd-48ac-b240-d2d764abe4a2"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.555253 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-scripts" (OuterVolumeSpecName: "scripts") pod "4cc9ba42-f6cd-48ac-b240-d2d764abe4a2" (UID: "4cc9ba42-f6cd-48ac-b240-d2d764abe4a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.561952 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4cc9ba42-f6cd-48ac-b240-d2d764abe4a2" (UID: "4cc9ba42-f6cd-48ac-b240-d2d764abe4a2"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.574958 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cc9ba42-f6cd-48ac-b240-d2d764abe4a2" (UID: "4cc9ba42-f6cd-48ac-b240-d2d764abe4a2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.627733 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spv4q\" (UniqueName: \"kubernetes.io/projected/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-kube-api-access-spv4q\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.627773 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.627785 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62gf9\" (UniqueName: \"kubernetes.io/projected/f9b5aaf3-84d2-47b2-8d21-684f354813f8-kube-api-access-62gf9\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.627796 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9zh9\" (UniqueName: \"kubernetes.io/projected/eea3602a-5333-49e3-ba9f-041bdd79218f-kube-api-access-v9zh9\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.627807 4825 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.627818 4825 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.627828 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9b5aaf3-84d2-47b2-8d21-684f354813f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:40 crc 
kubenswrapper[4825]: I0122 15:43:40.627838 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d719b91-8720-4e1c-9be9-a94cf9f3b15c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.627851 4825 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.627861 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.627873 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvwf6\" (UniqueName: \"kubernetes.io/projected/859cf314-a4cb-4952-8445-91d01ee03ca9-kube-api-access-gvwf6\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.627883 4825 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4cc9ba42-f6cd-48ac-b240-d2d764abe4a2-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.627893 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eea3602a-5333-49e3-ba9f-041bdd79218f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.627904 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfjx4\" (UniqueName: \"kubernetes.io/projected/5d719b91-8720-4e1c-9be9-a94cf9f3b15c-kube-api-access-jfjx4\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.627914 4825 reconciler_common.go:293] "Volume detached for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/859cf314-a4cb-4952-8445-91d01ee03ca9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:40 crc kubenswrapper[4825]: I0122 15:43:40.756190 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.105725 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kdjdn" event={"ID":"eea3602a-5333-49e3-ba9f-041bdd79218f","Type":"ContainerDied","Data":"f2456148ce355282d19d38b2da8d928d687f63582574f6f3a90d132a0fabf357"} Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.105793 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2456148ce355282d19d38b2da8d928d687f63582574f6f3a90d132a0fabf357" Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.105884 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kdjdn" Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.110314 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wqxdd" event={"ID":"5d719b91-8720-4e1c-9be9-a94cf9f3b15c","Type":"ContainerDied","Data":"3fdc4eea446c3e9d0f3c7d71064ff83dd08ed2d78b3f90c12dd834b8928e3017"} Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.110347 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fdc4eea446c3e9d0f3c7d71064ff83dd08ed2d78b3f90c12dd834b8928e3017" Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.110408 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wqxdd" Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.112407 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-5da4-account-create-update-67kjm" Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.112438 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-5da4-account-create-update-67kjm" event={"ID":"f9b5aaf3-84d2-47b2-8d21-684f354813f8","Type":"ContainerDied","Data":"52ac224ae430723ff554f9dbc85a70975d56e451fe5ceccb6b84db2babfc1c54"} Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.112508 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52ac224ae430723ff554f9dbc85a70975d56e451fe5ceccb6b84db2babfc1c54" Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.119358 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-36b5-account-create-update-bm8dq" event={"ID":"933073b1-0e88-4f40-9c8d-12050b5ccc0a","Type":"ContainerDied","Data":"101f9f98b18c76b7b37eaf266b4f8c81a35db2a3dedf393a7e93e8444b1846b7"} Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.119386 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="101f9f98b18c76b7b37eaf266b4f8c81a35db2a3dedf393a7e93e8444b1846b7" Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.119480 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-36b5-account-create-update-bm8dq" Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.123203 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-qjswp" event={"ID":"859cf314-a4cb-4952-8445-91d01ee03ca9","Type":"ContainerDied","Data":"61d047e891b3c6f29d81bb5ba7fce44363cdd08cf969a55400a43bebd170c34f"} Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.123244 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61d047e891b3c6f29d81bb5ba7fce44363cdd08cf969a55400a43bebd170c34f" Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.123261 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-qjswp" Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.125806 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-z6dh6" event={"ID":"4cc9ba42-f6cd-48ac-b240-d2d764abe4a2","Type":"ContainerDied","Data":"3dbef76531505da97891058ecad8fc461cc3722b251b94792a9665d5a77b0245"} Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.125864 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dbef76531505da97891058ecad8fc461cc3722b251b94792a9665d5a77b0245" Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.125935 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-z6dh6" Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.131670 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5835-account-create-update-ps6lh" event={"ID":"66c69ccb-9024-4ce2-bb24-04640babc65c","Type":"ContainerDied","Data":"bba517b483085bf7940c4b2e59666f44da8336030dbb00a806005963def83d54"} Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.131705 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bba517b483085bf7940c4b2e59666f44da8336030dbb00a806005963def83d54" Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.131733 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5835-account-create-update-ps6lh" Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.994042 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:41 crc kubenswrapper[4825]: I0122 15:43:41.996844 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:42 crc kubenswrapper[4825]: I0122 15:43:42.252322 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:44 crc kubenswrapper[4825]: I0122 15:43:44.878628 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 15:43:44 crc kubenswrapper[4825]: I0122 15:43:44.890646 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" containerName="prometheus" containerID="cri-o://5dc8bcc6d2882d133f792e5ea4a3405374deb065b251034dd6dfdf8a7a565469" gracePeriod=600 Jan 22 15:43:44 crc kubenswrapper[4825]: I0122 15:43:44.891040 4825 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" containerName="thanos-sidecar" containerID="cri-o://c68cc29d0d009622139b8ae7a5f0fbd380d5eff3052dc3f968eafa7e040a968d" gracePeriod=600 Jan 22 15:43:44 crc kubenswrapper[4825]: I0122 15:43:44.891015 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" containerName="config-reloader" containerID="cri-o://10764b40f8f1de47f6ef2730e3a37d2f2353dc3474ddc2e67965b22003273634" gracePeriod=600 Jan 22 15:43:45 crc kubenswrapper[4825]: I0122 15:43:45.285896 4825 generic.go:334] "Generic (PLEG): container finished" podID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" containerID="c68cc29d0d009622139b8ae7a5f0fbd380d5eff3052dc3f968eafa7e040a968d" exitCode=0 Jan 22 15:43:45 crc kubenswrapper[4825]: I0122 15:43:45.286263 4825 generic.go:334] "Generic (PLEG): container finished" podID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" containerID="10764b40f8f1de47f6ef2730e3a37d2f2353dc3474ddc2e67965b22003273634" exitCode=0 Jan 22 15:43:45 crc kubenswrapper[4825]: I0122 15:43:45.286279 4825 generic.go:334] "Generic (PLEG): container finished" podID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" containerID="5dc8bcc6d2882d133f792e5ea4a3405374deb065b251034dd6dfdf8a7a565469" exitCode=0 Jan 22 15:43:45 crc kubenswrapper[4825]: I0122 15:43:45.285936 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96e4c8b8-1127-4f5a-9fe7-51f8b2478388","Type":"ContainerDied","Data":"c68cc29d0d009622139b8ae7a5f0fbd380d5eff3052dc3f968eafa7e040a968d"} Jan 22 15:43:45 crc kubenswrapper[4825]: I0122 15:43:45.286326 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96e4c8b8-1127-4f5a-9fe7-51f8b2478388","Type":"ContainerDied","Data":"10764b40f8f1de47f6ef2730e3a37d2f2353dc3474ddc2e67965b22003273634"} 
Jan 22 15:43:45 crc kubenswrapper[4825]: I0122 15:43:45.286348 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96e4c8b8-1127-4f5a-9fe7-51f8b2478388","Type":"ContainerDied","Data":"5dc8bcc6d2882d133f792e5ea4a3405374deb065b251034dd6dfdf8a7a565469"} Jan 22 15:43:46 crc kubenswrapper[4825]: I0122 15:43:46.995018 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.119:9090/-/ready\": dial tcp 10.217.0.119:9090: connect: connection refused" Jan 22 15:43:50 crc kubenswrapper[4825]: W0122 15:43:50.538326 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62f00afd_c39a_409f_ba5e_b5474959717b.slice/crio-6cf79d9deff1d0e0986f454cec7013b9f54735dc429522f77ccee0ead2b6d4ed WatchSource:0}: Error finding container 6cf79d9deff1d0e0986f454cec7013b9f54735dc429522f77ccee0ead2b6d4ed: Status 404 returned error can't find the container with id 6cf79d9deff1d0e0986f454cec7013b9f54735dc429522f77ccee0ead2b6d4ed Jan 22 15:43:50 crc kubenswrapper[4825]: E0122 15:43:50.593482 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 22 15:43:50 crc kubenswrapper[4825]: E0122 15:43:50.594201 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-df7p8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-r6p27_openstack(ffcfdefe-f831-469c-9423-6cd4399435a7): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 22 15:43:50 crc kubenswrapper[4825]: E0122 15:43:50.595513 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-r6p27" podUID="ffcfdefe-f831-469c-9423-6cd4399435a7" Jan 22 15:43:50 crc kubenswrapper[4825]: I0122 15:43:50.989530 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.186291 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-config-out\") pod \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.186430 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-prometheus-metric-storage-rulefiles-0\") pod \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.186785 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-web-config\") pod \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.187471 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-prometheus-metric-storage-rulefiles-1\") pod 
\"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.187593 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-prometheus-metric-storage-rulefiles-2\") pod \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.187721 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-config\") pod \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.187829 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-tls-assets\") pod \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.187931 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-thanos-prometheus-http-client-file\") pod \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.187128 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "96e4c8b8-1127-4f5a-9fe7-51f8b2478388" (UID: "96e4c8b8-1127-4f5a-9fe7-51f8b2478388"). 
InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.187742 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "96e4c8b8-1127-4f5a-9fe7-51f8b2478388" (UID: "96e4c8b8-1127-4f5a-9fe7-51f8b2478388"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.188012 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "96e4c8b8-1127-4f5a-9fe7-51f8b2478388" (UID: "96e4c8b8-1127-4f5a-9fe7-51f8b2478388"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.188330 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\") pod \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.188441 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhp4r\" (UniqueName: \"kubernetes.io/projected/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-kube-api-access-lhp4r\") pod \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\" (UID: \"96e4c8b8-1127-4f5a-9fe7-51f8b2478388\") " Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.189116 4825 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.189219 4825 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.189304 4825 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.197487 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-config-out" (OuterVolumeSpecName: "config-out") 
pod "96e4c8b8-1127-4f5a-9fe7-51f8b2478388" (UID: "96e4c8b8-1127-4f5a-9fe7-51f8b2478388"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.197641 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-kube-api-access-lhp4r" (OuterVolumeSpecName: "kube-api-access-lhp4r") pod "96e4c8b8-1127-4f5a-9fe7-51f8b2478388" (UID: "96e4c8b8-1127-4f5a-9fe7-51f8b2478388"). InnerVolumeSpecName "kube-api-access-lhp4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.197760 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "96e4c8b8-1127-4f5a-9fe7-51f8b2478388" (UID: "96e4c8b8-1127-4f5a-9fe7-51f8b2478388"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.202220 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "96e4c8b8-1127-4f5a-9fe7-51f8b2478388" (UID: "96e4c8b8-1127-4f5a-9fe7-51f8b2478388"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.205074 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-config" (OuterVolumeSpecName: "config") pod "96e4c8b8-1127-4f5a-9fe7-51f8b2478388" (UID: "96e4c8b8-1127-4f5a-9fe7-51f8b2478388"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.223772 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-web-config" (OuterVolumeSpecName: "web-config") pod "96e4c8b8-1127-4f5a-9fe7-51f8b2478388" (UID: "96e4c8b8-1127-4f5a-9fe7-51f8b2478388"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.235440 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "96e4c8b8-1127-4f5a-9fe7-51f8b2478388" (UID: "96e4c8b8-1127-4f5a-9fe7-51f8b2478388"). InnerVolumeSpecName "pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.291205 4825 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-web-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.291238 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.291248 4825 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.291257 4825 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.291317 4825 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\") on node \"crc\" " Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.291331 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhp4r\" (UniqueName: \"kubernetes.io/projected/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-kube-api-access-lhp4r\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.291342 4825 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96e4c8b8-1127-4f5a-9fe7-51f8b2478388-config-out\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.314425 4825 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.314776 4825 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7") on node "crc" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.373353 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96e4c8b8-1127-4f5a-9fe7-51f8b2478388","Type":"ContainerDied","Data":"6b5a9f77e73d9348fe038ef19f8e7928d862a2b42ed704732abe21dffd8f71f7"} Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.373584 4825 scope.go:117] "RemoveContainer" containerID="c68cc29d0d009622139b8ae7a5f0fbd380d5eff3052dc3f968eafa7e040a968d" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.373704 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.377127 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2gv7p" event={"ID":"901e6ba7-980d-4fee-acbd-5aa8314aed8e","Type":"ContainerStarted","Data":"a7f7a7a114493dfdb4ad180cfca393e175a61a2114ffb28e6e67efd950d9dbf7"} Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.386493 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62f00afd-c39a-409f-ba5e-b5474959717b","Type":"ContainerStarted","Data":"6cf79d9deff1d0e0986f454cec7013b9f54735dc429522f77ccee0ead2b6d4ed"} Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.398709 4825 reconciler_common.go:293] "Volume detached for volume \"pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.402473 4825 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/keystone-db-sync-2gv7p" podStartSLOduration=13.171769515 podStartE2EDuration="20.40245434s" podCreationTimestamp="2026-01-22 15:43:31 +0000 UTC" firstStartedPulling="2026-01-22 15:43:32.983706481 +0000 UTC m=+1159.745233391" lastFinishedPulling="2026-01-22 15:43:40.214391316 +0000 UTC m=+1166.975918216" observedRunningTime="2026-01-22 15:43:51.394579764 +0000 UTC m=+1178.156106674" watchObservedRunningTime="2026-01-22 15:43:51.40245434 +0000 UTC m=+1178.163981250" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.402656 4825 scope.go:117] "RemoveContainer" containerID="10764b40f8f1de47f6ef2730e3a37d2f2353dc3474ddc2e67965b22003273634" Jan 22 15:43:51 crc kubenswrapper[4825]: E0122 15:43:51.402878 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-r6p27" podUID="ffcfdefe-f831-469c-9423-6cd4399435a7" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.447746 4825 scope.go:117] "RemoveContainer" containerID="5dc8bcc6d2882d133f792e5ea4a3405374deb065b251034dd6dfdf8a7a565469" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.462061 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.484108 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.490786 4825 scope.go:117] "RemoveContainer" containerID="cdef4fcb8164826595dbc00ba1067188d2efc3d80919f2af6b91f912c22490c1" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.496924 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 15:43:51 crc kubenswrapper[4825]: E0122 15:43:51.497558 4825 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" containerName="thanos-sidecar" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.497583 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" containerName="thanos-sidecar" Jan 22 15:43:51 crc kubenswrapper[4825]: E0122 15:43:51.497603 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d719b91-8720-4e1c-9be9-a94cf9f3b15c" containerName="mariadb-database-create" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.497612 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d719b91-8720-4e1c-9be9-a94cf9f3b15c" containerName="mariadb-database-create" Jan 22 15:43:51 crc kubenswrapper[4825]: E0122 15:43:51.497642 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a6798c-df21-4a31-a652-836868719f0e" containerName="mariadb-database-create" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.497651 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a6798c-df21-4a31-a652-836868719f0e" containerName="mariadb-database-create" Jan 22 15:43:51 crc kubenswrapper[4825]: E0122 15:43:51.497665 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="933073b1-0e88-4f40-9c8d-12050b5ccc0a" containerName="mariadb-account-create-update" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.497673 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="933073b1-0e88-4f40-9c8d-12050b5ccc0a" containerName="mariadb-account-create-update" Jan 22 15:43:51 crc kubenswrapper[4825]: E0122 15:43:51.497683 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea3602a-5333-49e3-ba9f-041bdd79218f" containerName="mariadb-account-create-update" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.497691 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea3602a-5333-49e3-ba9f-041bdd79218f" 
containerName="mariadb-account-create-update" Jan 22 15:43:51 crc kubenswrapper[4825]: E0122 15:43:51.497705 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66c69ccb-9024-4ce2-bb24-04640babc65c" containerName="mariadb-account-create-update" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.497713 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="66c69ccb-9024-4ce2-bb24-04640babc65c" containerName="mariadb-account-create-update" Jan 22 15:43:51 crc kubenswrapper[4825]: E0122 15:43:51.497727 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" containerName="config-reloader" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.497735 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" containerName="config-reloader" Jan 22 15:43:51 crc kubenswrapper[4825]: E0122 15:43:51.497749 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cc9ba42-f6cd-48ac-b240-d2d764abe4a2" containerName="swift-ring-rebalance" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.497757 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cc9ba42-f6cd-48ac-b240-d2d764abe4a2" containerName="swift-ring-rebalance" Jan 22 15:43:51 crc kubenswrapper[4825]: E0122 15:43:51.497772 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" containerName="init-config-reloader" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.497780 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" containerName="init-config-reloader" Jan 22 15:43:51 crc kubenswrapper[4825]: E0122 15:43:51.497797 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" containerName="prometheus" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.497804 4825 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" containerName="prometheus" Jan 22 15:43:51 crc kubenswrapper[4825]: E0122 15:43:51.497813 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b5aaf3-84d2-47b2-8d21-684f354813f8" containerName="mariadb-account-create-update" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.497822 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b5aaf3-84d2-47b2-8d21-684f354813f8" containerName="mariadb-account-create-update" Jan 22 15:43:51 crc kubenswrapper[4825]: E0122 15:43:51.497837 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1de7e5d-05c7-4b56-9896-41aece1133fe" containerName="mariadb-account-create-update" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.497847 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1de7e5d-05c7-4b56-9896-41aece1133fe" containerName="mariadb-account-create-update" Jan 22 15:43:51 crc kubenswrapper[4825]: E0122 15:43:51.497857 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859cf314-a4cb-4952-8445-91d01ee03ca9" containerName="mariadb-database-create" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.497866 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="859cf314-a4cb-4952-8445-91d01ee03ca9" containerName="mariadb-database-create" Jan 22 15:43:51 crc kubenswrapper[4825]: E0122 15:43:51.497881 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b7bf140-ffaa-4907-a659-0e00718698e0" containerName="mariadb-database-create" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.497889 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b7bf140-ffaa-4907-a659-0e00718698e0" containerName="mariadb-database-create" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.498149 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" containerName="thanos-sidecar" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 
15:43:51.498167 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1de7e5d-05c7-4b56-9896-41aece1133fe" containerName="mariadb-account-create-update" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.498180 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b5aaf3-84d2-47b2-8d21-684f354813f8" containerName="mariadb-account-create-update" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.498196 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="66c69ccb-9024-4ce2-bb24-04640babc65c" containerName="mariadb-account-create-update" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.498205 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" containerName="config-reloader" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.498218 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a6798c-df21-4a31-a652-836868719f0e" containerName="mariadb-database-create" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.498233 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="933073b1-0e88-4f40-9c8d-12050b5ccc0a" containerName="mariadb-account-create-update" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.498246 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cc9ba42-f6cd-48ac-b240-d2d764abe4a2" containerName="swift-ring-rebalance" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.498258 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" containerName="prometheus" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.498267 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d719b91-8720-4e1c-9be9-a94cf9f3b15c" containerName="mariadb-database-create" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.498282 4825 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="eea3602a-5333-49e3-ba9f-041bdd79218f" containerName="mariadb-account-create-update" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.498292 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="859cf314-a4cb-4952-8445-91d01ee03ca9" containerName="mariadb-database-create" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.498301 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b7bf140-ffaa-4907-a659-0e00718698e0" containerName="mariadb-database-create" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.503843 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.506120 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.506346 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.506604 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-g98sx" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.520176 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.520833 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.521614 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.521897 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 
22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.522653 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.536397 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.549919 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96e4c8b8-1127-4f5a-9fe7-51f8b2478388" path="/var/lib/kubelet/pods/96e4c8b8-1127-4f5a-9fe7-51f8b2478388/volumes" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.553165 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.602992 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/667e755d-b6f5-4280-9640-a7a893684b7f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.603096 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.603133 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/667e755d-b6f5-4280-9640-a7a893684b7f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.603191 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/667e755d-b6f5-4280-9640-a7a893684b7f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.603231 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/667e755d-b6f5-4280-9640-a7a893684b7f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.603277 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bnhl\" (UniqueName: \"kubernetes.io/projected/667e755d-b6f5-4280-9640-a7a893684b7f-kube-api-access-4bnhl\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.603313 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/667e755d-b6f5-4280-9640-a7a893684b7f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.603338 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/667e755d-b6f5-4280-9640-a7a893684b7f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.603587 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/667e755d-b6f5-4280-9640-a7a893684b7f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.603764 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/667e755d-b6f5-4280-9640-a7a893684b7f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.603962 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/667e755d-b6f5-4280-9640-a7a893684b7f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.604129 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/667e755d-b6f5-4280-9640-a7a893684b7f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" 
(UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.604954 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/667e755d-b6f5-4280-9640-a7a893684b7f-config\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.706748 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/667e755d-b6f5-4280-9640-a7a893684b7f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.706853 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/667e755d-b6f5-4280-9640-a7a893684b7f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.707047 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/667e755d-b6f5-4280-9640-a7a893684b7f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.707549 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/667e755d-b6f5-4280-9640-a7a893684b7f-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.707577 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/667e755d-b6f5-4280-9640-a7a893684b7f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.707628 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.707654 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/667e755d-b6f5-4280-9640-a7a893684b7f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.707707 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/667e755d-b6f5-4280-9640-a7a893684b7f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.707740 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/667e755d-b6f5-4280-9640-a7a893684b7f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.707821 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bnhl\" (UniqueName: \"kubernetes.io/projected/667e755d-b6f5-4280-9640-a7a893684b7f-kube-api-access-4bnhl\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.707855 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/667e755d-b6f5-4280-9640-a7a893684b7f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.707876 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/667e755d-b6f5-4280-9640-a7a893684b7f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.707900 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/667e755d-b6f5-4280-9640-a7a893684b7f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.708964 4825 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/667e755d-b6f5-4280-9640-a7a893684b7f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.711839 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/667e755d-b6f5-4280-9640-a7a893684b7f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.712031 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/667e755d-b6f5-4280-9640-a7a893684b7f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.712541 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/667e755d-b6f5-4280-9640-a7a893684b7f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.712666 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/667e755d-b6f5-4280-9640-a7a893684b7f-web-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.713694 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/667e755d-b6f5-4280-9640-a7a893684b7f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.714729 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.714766 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f384ca950549f4e6139e9d3c1ffd101c55a7a0c2a28a49f66cc0b4e36aaf3b93/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.715018 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/667e755d-b6f5-4280-9640-a7a893684b7f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.715597 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/667e755d-b6f5-4280-9640-a7a893684b7f-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.716581 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/667e755d-b6f5-4280-9640-a7a893684b7f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.720843 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/667e755d-b6f5-4280-9640-a7a893684b7f-config\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.720861 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/667e755d-b6f5-4280-9640-a7a893684b7f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.735932 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bnhl\" (UniqueName: \"kubernetes.io/projected/667e755d-b6f5-4280-9640-a7a893684b7f-kube-api-access-4bnhl\") pod \"prometheus-metric-storage-0\" (UID: \"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.774904 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-48922b32-77bd-4a41-8c12-8c747fe3bcf7\") pod \"prometheus-metric-storage-0\" (UID: 
\"667e755d-b6f5-4280-9640-a7a893684b7f\") " pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:51 crc kubenswrapper[4825]: I0122 15:43:51.875161 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 15:43:52 crc kubenswrapper[4825]: I0122 15:43:52.351432 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 15:43:52 crc kubenswrapper[4825]: W0122 15:43:52.421500 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod667e755d_b6f5_4280_9640_a7a893684b7f.slice/crio-3c3e8c6fa0c878d2c0d25fc8608ad059f92c8891d856788185e0e65ed4b90b69 WatchSource:0}: Error finding container 3c3e8c6fa0c878d2c0d25fc8608ad059f92c8891d856788185e0e65ed4b90b69: Status 404 returned error can't find the container with id 3c3e8c6fa0c878d2c0d25fc8608ad059f92c8891d856788185e0e65ed4b90b69 Jan 22 15:43:53 crc kubenswrapper[4825]: I0122 15:43:53.547521 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"667e755d-b6f5-4280-9640-a7a893684b7f","Type":"ContainerStarted","Data":"3c3e8c6fa0c878d2c0d25fc8608ad059f92c8891d856788185e0e65ed4b90b69"} Jan 22 15:43:53 crc kubenswrapper[4825]: I0122 15:43:53.547807 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62f00afd-c39a-409f-ba5e-b5474959717b","Type":"ContainerStarted","Data":"e408b15eeac57dc059a5dd90d4bcde572f0dd3e5eb02a336873ddc172c7219e7"} Jan 22 15:43:53 crc kubenswrapper[4825]: I0122 15:43:53.547819 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62f00afd-c39a-409f-ba5e-b5474959717b","Type":"ContainerStarted","Data":"a10f78a136d31dc096f4d3a75d6dac7238b3b572abf01d8d6bca3ce4ff5936e7"} Jan 22 15:43:53 crc kubenswrapper[4825]: I0122 15:43:53.547828 4825 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/swift-storage-0" event={"ID":"62f00afd-c39a-409f-ba5e-b5474959717b","Type":"ContainerStarted","Data":"db6aaf689251d973608aaf20dc3f0494083c37e266b8feb56dfc44f260e3b7a9"} Jan 22 15:43:53 crc kubenswrapper[4825]: I0122 15:43:53.547836 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62f00afd-c39a-409f-ba5e-b5474959717b","Type":"ContainerStarted","Data":"50bc9307c187ff78705c629bc5317e5732b9240056e3a14f1e48651bda8dc9a9"} Jan 22 15:43:55 crc kubenswrapper[4825]: I0122 15:43:55.564174 4825 generic.go:334] "Generic (PLEG): container finished" podID="901e6ba7-980d-4fee-acbd-5aa8314aed8e" containerID="a7f7a7a114493dfdb4ad180cfca393e175a61a2114ffb28e6e67efd950d9dbf7" exitCode=0 Jan 22 15:43:55 crc kubenswrapper[4825]: I0122 15:43:55.564727 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2gv7p" event={"ID":"901e6ba7-980d-4fee-acbd-5aa8314aed8e","Type":"ContainerDied","Data":"a7f7a7a114493dfdb4ad180cfca393e175a61a2114ffb28e6e67efd950d9dbf7"} Jan 22 15:43:56 crc kubenswrapper[4825]: I0122 15:43:56.581415 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"667e755d-b6f5-4280-9640-a7a893684b7f","Type":"ContainerStarted","Data":"48e9e0f119fbe4d06ec85a9e4863d0f0a09e4d15efcd3ff0736beadb9b452f31"} Jan 22 15:43:56 crc kubenswrapper[4825]: I0122 15:43:56.928852 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-2gv7p" Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.067125 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/901e6ba7-980d-4fee-acbd-5aa8314aed8e-config-data\") pod \"901e6ba7-980d-4fee-acbd-5aa8314aed8e\" (UID: \"901e6ba7-980d-4fee-acbd-5aa8314aed8e\") " Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.067862 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/901e6ba7-980d-4fee-acbd-5aa8314aed8e-combined-ca-bundle\") pod \"901e6ba7-980d-4fee-acbd-5aa8314aed8e\" (UID: \"901e6ba7-980d-4fee-acbd-5aa8314aed8e\") " Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.067966 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq98r\" (UniqueName: \"kubernetes.io/projected/901e6ba7-980d-4fee-acbd-5aa8314aed8e-kube-api-access-rq98r\") pod \"901e6ba7-980d-4fee-acbd-5aa8314aed8e\" (UID: \"901e6ba7-980d-4fee-acbd-5aa8314aed8e\") " Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.071388 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/901e6ba7-980d-4fee-acbd-5aa8314aed8e-kube-api-access-rq98r" (OuterVolumeSpecName: "kube-api-access-rq98r") pod "901e6ba7-980d-4fee-acbd-5aa8314aed8e" (UID: "901e6ba7-980d-4fee-acbd-5aa8314aed8e"). InnerVolumeSpecName "kube-api-access-rq98r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.106157 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/901e6ba7-980d-4fee-acbd-5aa8314aed8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "901e6ba7-980d-4fee-acbd-5aa8314aed8e" (UID: "901e6ba7-980d-4fee-acbd-5aa8314aed8e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.128094 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/901e6ba7-980d-4fee-acbd-5aa8314aed8e-config-data" (OuterVolumeSpecName: "config-data") pod "901e6ba7-980d-4fee-acbd-5aa8314aed8e" (UID: "901e6ba7-980d-4fee-acbd-5aa8314aed8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.170762 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/901e6ba7-980d-4fee-acbd-5aa8314aed8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.170800 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rq98r\" (UniqueName: \"kubernetes.io/projected/901e6ba7-980d-4fee-acbd-5aa8314aed8e-kube-api-access-rq98r\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.170819 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/901e6ba7-980d-4fee-acbd-5aa8314aed8e-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.591576 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-2gv7p" Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.592032 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2gv7p" event={"ID":"901e6ba7-980d-4fee-acbd-5aa8314aed8e","Type":"ContainerDied","Data":"d936de390b97127a8cdce73f800d54ce9edfcfa18df49b2b5899d907652ee8d0"} Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.592100 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d936de390b97127a8cdce73f800d54ce9edfcfa18df49b2b5899d907652ee8d0" Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.595706 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62f00afd-c39a-409f-ba5e-b5474959717b","Type":"ContainerStarted","Data":"f7ef449de9832cb9d1b8b81b4ff41969edd2f7efcc7be358d0fdff699062f162"} Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.595741 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62f00afd-c39a-409f-ba5e-b5474959717b","Type":"ContainerStarted","Data":"b04f1f1edb55ff1209035793e913d9d3a0c8b535a15f5c7e15fdc8ed29e997f0"} Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.595755 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62f00afd-c39a-409f-ba5e-b5474959717b","Type":"ContainerStarted","Data":"020b0ef3e615e648ae601c373c42ffa376bec6e08d2a06d40a9cc680b27c2a4f"} Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.595765 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62f00afd-c39a-409f-ba5e-b5474959717b","Type":"ContainerStarted","Data":"93607a0fbf26f7d30b56d5e5c539f5892aed13f71b2d87be52b9c6ee0f6ddaff"} Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.917158 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-zj6lt"] Jan 22 15:43:57 crc kubenswrapper[4825]: 
E0122 15:43:57.918299 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901e6ba7-980d-4fee-acbd-5aa8314aed8e" containerName="keystone-db-sync" Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.918330 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="901e6ba7-980d-4fee-acbd-5aa8314aed8e" containerName="keystone-db-sync" Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.918645 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="901e6ba7-980d-4fee-acbd-5aa8314aed8e" containerName="keystone-db-sync" Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.927797 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.947925 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8m8d9"] Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.949825 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.955527 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g9wlz" Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.955728 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.955865 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.959913 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8m8d9"] Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.961351 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.964209 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 15:43:57 crc kubenswrapper[4825]: I0122 15:43:57.971220 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-zj6lt"] Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.023921 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-zj6lt\" (UID: \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.023993 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-combined-ca-bundle\") pod \"keystone-bootstrap-8m8d9\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " 
pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.024025 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-zj6lt\" (UID: \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.024052 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-credential-keys\") pod \"keystone-bootstrap-8m8d9\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.024079 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-fernet-keys\") pod \"keystone-bootstrap-8m8d9\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.024108 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-config\") pod \"dnsmasq-dns-5c9d85d47c-zj6lt\" (UID: \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.024173 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66sds\" (UniqueName: \"kubernetes.io/projected/a1af22fb-1bea-49bf-8458-5a0f68e472c4-kube-api-access-66sds\") pod \"dnsmasq-dns-5c9d85d47c-zj6lt\" (UID: 
\"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.024199 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-config-data\") pod \"keystone-bootstrap-8m8d9\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.024219 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kbmj\" (UniqueName: \"kubernetes.io/projected/2ade185c-19d9-4085-a9e0-b2c011665989-kube-api-access-7kbmj\") pod \"keystone-bootstrap-8m8d9\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.024237 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-scripts\") pod \"keystone-bootstrap-8m8d9\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.024298 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-zj6lt\" (UID: \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.102723 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-54vjv"] Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.104299 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.116782 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hgzqk" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.117342 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.117577 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.128486 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-combined-ca-bundle\") pod \"keystone-bootstrap-8m8d9\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.128545 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-zj6lt\" (UID: \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.128572 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-credential-keys\") pod \"keystone-bootstrap-8m8d9\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.128599 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-fernet-keys\") pod \"keystone-bootstrap-8m8d9\" (UID: 
\"2ade185c-19d9-4085-a9e0-b2c011665989\") " pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.128626 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-config\") pod \"dnsmasq-dns-5c9d85d47c-zj6lt\" (UID: \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.128686 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dspld\" (UniqueName: \"kubernetes.io/projected/7211decb-e02d-47e6-9ea7-493e8e6a3743-kube-api-access-dspld\") pod \"cinder-db-sync-54vjv\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.128706 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66sds\" (UniqueName: \"kubernetes.io/projected/a1af22fb-1bea-49bf-8458-5a0f68e472c4-kube-api-access-66sds\") pod \"dnsmasq-dns-5c9d85d47c-zj6lt\" (UID: \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.128723 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-scripts\") pod \"cinder-db-sync-54vjv\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.128748 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-config-data\") pod \"keystone-bootstrap-8m8d9\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " 
pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.128771 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kbmj\" (UniqueName: \"kubernetes.io/projected/2ade185c-19d9-4085-a9e0-b2c011665989-kube-api-access-7kbmj\") pod \"keystone-bootstrap-8m8d9\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.128788 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-combined-ca-bundle\") pod \"cinder-db-sync-54vjv\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.129879 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-scripts\") pod \"keystone-bootstrap-8m8d9\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.129914 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-config-data\") pod \"cinder-db-sync-54vjv\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.130019 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-zj6lt\" (UID: \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" Jan 22 15:43:58 crc kubenswrapper[4825]: 
I0122 15:43:58.130043 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-db-sync-config-data\") pod \"cinder-db-sync-54vjv\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.130066 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7211decb-e02d-47e6-9ea7-493e8e6a3743-etc-machine-id\") pod \"cinder-db-sync-54vjv\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.130097 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-zj6lt\" (UID: \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.131388 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-zj6lt\" (UID: \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.132603 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-zj6lt\" (UID: \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.133423 4825 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/cinder-db-sync-54vjv"] Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.133828 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-zj6lt\" (UID: \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.134201 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-config\") pod \"dnsmasq-dns-5c9d85d47c-zj6lt\" (UID: \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.145737 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-scripts\") pod \"keystone-bootstrap-8m8d9\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.146155 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-credential-keys\") pod \"keystone-bootstrap-8m8d9\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.146411 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-config-data\") pod \"keystone-bootstrap-8m8d9\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.146582 4825 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-combined-ca-bundle\") pod \"keystone-bootstrap-8m8d9\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.154440 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-fernet-keys\") pod \"keystone-bootstrap-8m8d9\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.179298 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kbmj\" (UniqueName: \"kubernetes.io/projected/2ade185c-19d9-4085-a9e0-b2c011665989-kube-api-access-7kbmj\") pod \"keystone-bootstrap-8m8d9\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.203212 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-72pdg"] Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.204470 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-72pdg" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.208783 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66sds\" (UniqueName: \"kubernetes.io/projected/a1af22fb-1bea-49bf-8458-5a0f68e472c4-kube-api-access-66sds\") pod \"dnsmasq-dns-5c9d85d47c-zj6lt\" (UID: \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.214512 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-958wc" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.215099 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.215208 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.233166 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-db-sync-config-data\") pod \"cinder-db-sync-54vjv\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.233214 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7211decb-e02d-47e6-9ea7-493e8e6a3743-etc-machine-id\") pod \"cinder-db-sync-54vjv\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.233244 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc53412-3b60-473b-a918-df5872264c8e-combined-ca-bundle\") pod 
\"neutron-db-sync-72pdg\" (UID: \"fbc53412-3b60-473b-a918-df5872264c8e\") " pod="openstack/neutron-db-sync-72pdg" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.233347 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dspld\" (UniqueName: \"kubernetes.io/projected/7211decb-e02d-47e6-9ea7-493e8e6a3743-kube-api-access-dspld\") pod \"cinder-db-sync-54vjv\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.233369 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-scripts\") pod \"cinder-db-sync-54vjv\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.233396 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-combined-ca-bundle\") pod \"cinder-db-sync-54vjv\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.233417 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-config-data\") pod \"cinder-db-sync-54vjv\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.233437 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n84pg\" (UniqueName: \"kubernetes.io/projected/fbc53412-3b60-473b-a918-df5872264c8e-kube-api-access-n84pg\") pod \"neutron-db-sync-72pdg\" (UID: \"fbc53412-3b60-473b-a918-df5872264c8e\") " 
pod="openstack/neutron-db-sync-72pdg" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.233476 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbc53412-3b60-473b-a918-df5872264c8e-config\") pod \"neutron-db-sync-72pdg\" (UID: \"fbc53412-3b60-473b-a918-df5872264c8e\") " pod="openstack/neutron-db-sync-72pdg" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.234209 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7211decb-e02d-47e6-9ea7-493e8e6a3743-etc-machine-id\") pod \"cinder-db-sync-54vjv\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.238747 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-combined-ca-bundle\") pod \"cinder-db-sync-54vjv\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.251059 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.254926 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-scripts\") pod \"cinder-db-sync-54vjv\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.255537 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-db-sync-config-data\") pod \"cinder-db-sync-54vjv\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.284649 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.286423 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-config-data\") pod \"cinder-db-sync-54vjv\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.324130 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-72pdg"] Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.359528 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dspld\" (UniqueName: \"kubernetes.io/projected/7211decb-e02d-47e6-9ea7-493e8e6a3743-kube-api-access-dspld\") pod \"cinder-db-sync-54vjv\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.408325 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-n84pg\" (UniqueName: \"kubernetes.io/projected/fbc53412-3b60-473b-a918-df5872264c8e-kube-api-access-n84pg\") pod \"neutron-db-sync-72pdg\" (UID: \"fbc53412-3b60-473b-a918-df5872264c8e\") " pod="openstack/neutron-db-sync-72pdg" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.408413 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbc53412-3b60-473b-a918-df5872264c8e-config\") pod \"neutron-db-sync-72pdg\" (UID: \"fbc53412-3b60-473b-a918-df5872264c8e\") " pod="openstack/neutron-db-sync-72pdg" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.408485 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc53412-3b60-473b-a918-df5872264c8e-combined-ca-bundle\") pod \"neutron-db-sync-72pdg\" (UID: \"fbc53412-3b60-473b-a918-df5872264c8e\") " pod="openstack/neutron-db-sync-72pdg" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.420299 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.422838 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.429042 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.429455 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbc53412-3b60-473b-a918-df5872264c8e-config\") pod \"neutron-db-sync-72pdg\" (UID: \"fbc53412-3b60-473b-a918-df5872264c8e\") " pod="openstack/neutron-db-sync-72pdg" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.429690 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.430898 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc53412-3b60-473b-a918-df5872264c8e-combined-ca-bundle\") pod \"neutron-db-sync-72pdg\" (UID: \"fbc53412-3b60-473b-a918-df5872264c8e\") " pod="openstack/neutron-db-sync-72pdg" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.458515 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n84pg\" (UniqueName: \"kubernetes.io/projected/fbc53412-3b60-473b-a918-df5872264c8e-kube-api-access-n84pg\") pod \"neutron-db-sync-72pdg\" (UID: \"fbc53412-3b60-473b-a918-df5872264c8e\") " pod="openstack/neutron-db-sync-72pdg" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.519692 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-t8c8z"] Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.522351 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-t8c8z" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.529467 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.530282 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.530698 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hjlrl" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.538083 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.550839 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-zj6lt"] Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.562472 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-t8c8z"] Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.573875 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-6fxh4"] Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.575718 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-6fxh4" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.583930 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jd6zq" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.584557 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.584944 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6fxh4"] Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.591818 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-7zl97"] Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.594218 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.600887 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-7zl97"] Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.632448 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-scripts\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.632504 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.632527 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7bdgl\" (UniqueName: \"kubernetes.io/projected/d2480086-9709-4e61-af71-042055623d32-kube-api-access-7bdgl\") pod \"placement-db-sync-t8c8z\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") " pod="openstack/placement-db-sync-t8c8z" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.632582 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dckz9\" (UniqueName: \"kubernetes.io/projected/e224d84c-5d76-451a-9c81-bdf42336c375-kube-api-access-dckz9\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.632603 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e224d84c-5d76-451a-9c81-bdf42336c375-log-httpd\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.632619 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.632640 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2480086-9709-4e61-af71-042055623d32-scripts\") pod \"placement-db-sync-t8c8z\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") " pod="openstack/placement-db-sync-t8c8z" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.632669 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d2480086-9709-4e61-af71-042055623d32-logs\") pod \"placement-db-sync-t8c8z\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") " pod="openstack/placement-db-sync-t8c8z" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.632718 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e224d84c-5d76-451a-9c81-bdf42336c375-run-httpd\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.632732 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2480086-9709-4e61-af71-042055623d32-combined-ca-bundle\") pod \"placement-db-sync-t8c8z\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") " pod="openstack/placement-db-sync-t8c8z" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.632770 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-config-data\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.632807 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2480086-9709-4e61-af71-042055623d32-config-data\") pod \"placement-db-sync-t8c8z\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") " pod="openstack/placement-db-sync-t8c8z" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.633077 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-54vjv" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.672412 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-cshtw"] Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.674016 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-cshtw" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.681041 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-nn5s9" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.681315 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.681563 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.681776 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.700285 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-cshtw"] Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.708484 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-72pdg" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.736947 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-7zl97\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.737005 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.737028 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bdgl\" (UniqueName: \"kubernetes.io/projected/d2480086-9709-4e61-af71-042055623d32-kube-api-access-7bdgl\") pod \"placement-db-sync-t8c8z\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") " pod="openstack/placement-db-sync-t8c8z" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.737046 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-config\") pod \"dnsmasq-dns-6ffb94d8ff-7zl97\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.737092 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dckz9\" (UniqueName: \"kubernetes.io/projected/e224d84c-5d76-451a-9c81-bdf42336c375-kube-api-access-dckz9\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc 
kubenswrapper[4825]: I0122 15:43:58.737114 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e224d84c-5d76-451a-9c81-bdf42336c375-log-httpd\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.737143 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.737161 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2480086-9709-4e61-af71-042055623d32-scripts\") pod \"placement-db-sync-t8c8z\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") " pod="openstack/placement-db-sync-t8c8z" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.737183 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-7zl97\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.737203 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2480086-9709-4e61-af71-042055623d32-logs\") pod \"placement-db-sync-t8c8z\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") " pod="openstack/placement-db-sync-t8c8z" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.737221 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-w76vn\" (UniqueName: \"kubernetes.io/projected/5f53b9a1-c843-4044-a465-0a8ea7734c1f-kube-api-access-w76vn\") pod \"dnsmasq-dns-6ffb94d8ff-7zl97\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.737240 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-7zl97\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.737288 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/589c7924-baff-443f-b923-59a1348c709a-combined-ca-bundle\") pod \"barbican-db-sync-6fxh4\" (UID: \"589c7924-baff-443f-b923-59a1348c709a\") " pod="openstack/barbican-db-sync-6fxh4" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.737308 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e224d84c-5d76-451a-9c81-bdf42336c375-run-httpd\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.737327 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2480086-9709-4e61-af71-042055623d32-combined-ca-bundle\") pod \"placement-db-sync-t8c8z\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") " pod="openstack/placement-db-sync-t8c8z" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.737358 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-config-data\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.737391 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/589c7924-baff-443f-b923-59a1348c709a-db-sync-config-data\") pod \"barbican-db-sync-6fxh4\" (UID: \"589c7924-baff-443f-b923-59a1348c709a\") " pod="openstack/barbican-db-sync-6fxh4" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.737413 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhrx4\" (UniqueName: \"kubernetes.io/projected/589c7924-baff-443f-b923-59a1348c709a-kube-api-access-qhrx4\") pod \"barbican-db-sync-6fxh4\" (UID: \"589c7924-baff-443f-b923-59a1348c709a\") " pod="openstack/barbican-db-sync-6fxh4" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.737428 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2480086-9709-4e61-af71-042055623d32-config-data\") pod \"placement-db-sync-t8c8z\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") " pod="openstack/placement-db-sync-t8c8z" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.737454 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-scripts\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.738629 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e224d84c-5d76-451a-9c81-bdf42336c375-run-httpd\") pod \"ceilometer-0\" (UID: 
\"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.738939 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2480086-9709-4e61-af71-042055623d32-logs\") pod \"placement-db-sync-t8c8z\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") " pod="openstack/placement-db-sync-t8c8z" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.751090 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e224d84c-5d76-451a-9c81-bdf42336c375-log-httpd\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.755771 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.761910 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2480086-9709-4e61-af71-042055623d32-config-data\") pod \"placement-db-sync-t8c8z\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") " pod="openstack/placement-db-sync-t8c8z" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.762433 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-scripts\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.787866 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d2480086-9709-4e61-af71-042055623d32-scripts\") pod \"placement-db-sync-t8c8z\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") " pod="openstack/placement-db-sync-t8c8z" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.800079 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2480086-9709-4e61-af71-042055623d32-combined-ca-bundle\") pod \"placement-db-sync-t8c8z\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") " pod="openstack/placement-db-sync-t8c8z" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.812031 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.816859 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dckz9\" (UniqueName: \"kubernetes.io/projected/e224d84c-5d76-451a-9c81-bdf42336c375-kube-api-access-dckz9\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.820731 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-config-data\") pod \"ceilometer-0\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " pod="openstack/ceilometer-0" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.842500 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/589c7924-baff-443f-b923-59a1348c709a-combined-ca-bundle\") pod \"barbican-db-sync-6fxh4\" (UID: \"589c7924-baff-443f-b923-59a1348c709a\") " 
pod="openstack/barbican-db-sync-6fxh4" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.842549 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-certs\") pod \"cloudkitty-db-sync-cshtw\" (UID: \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " pod="openstack/cloudkitty-db-sync-cshtw" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.842580 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-scripts\") pod \"cloudkitty-db-sync-cshtw\" (UID: \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " pod="openstack/cloudkitty-db-sync-cshtw" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.842612 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-combined-ca-bundle\") pod \"cloudkitty-db-sync-cshtw\" (UID: \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " pod="openstack/cloudkitty-db-sync-cshtw" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.842648 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/589c7924-baff-443f-b923-59a1348c709a-db-sync-config-data\") pod \"barbican-db-sync-6fxh4\" (UID: \"589c7924-baff-443f-b923-59a1348c709a\") " pod="openstack/barbican-db-sync-6fxh4" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.842666 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhrx4\" (UniqueName: \"kubernetes.io/projected/589c7924-baff-443f-b923-59a1348c709a-kube-api-access-qhrx4\") pod \"barbican-db-sync-6fxh4\" (UID: \"589c7924-baff-443f-b923-59a1348c709a\") " pod="openstack/barbican-db-sync-6fxh4" Jan 22 15:43:58 
crc kubenswrapper[4825]: I0122 15:43:58.842684 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-config-data\") pod \"cloudkitty-db-sync-cshtw\" (UID: \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " pod="openstack/cloudkitty-db-sync-cshtw" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.842720 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-7zl97\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.842748 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-config\") pod \"dnsmasq-dns-6ffb94d8ff-7zl97\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.842819 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-7zl97\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.842840 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w76vn\" (UniqueName: \"kubernetes.io/projected/5f53b9a1-c843-4044-a465-0a8ea7734c1f-kube-api-access-w76vn\") pod \"dnsmasq-dns-6ffb94d8ff-7zl97\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.842861 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-7zl97\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.842880 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mst8x\" (UniqueName: \"kubernetes.io/projected/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-kube-api-access-mst8x\") pod \"cloudkitty-db-sync-cshtw\" (UID: \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " pod="openstack/cloudkitty-db-sync-cshtw" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.849188 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-7zl97\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.849708 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-7zl97\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.850281 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-7zl97\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.855812 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/589c7924-baff-443f-b923-59a1348c709a-combined-ca-bundle\") pod \"barbican-db-sync-6fxh4\" (UID: \"589c7924-baff-443f-b923-59a1348c709a\") " pod="openstack/barbican-db-sync-6fxh4" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.855890 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bdgl\" (UniqueName: \"kubernetes.io/projected/d2480086-9709-4e61-af71-042055623d32-kube-api-access-7bdgl\") pod \"placement-db-sync-t8c8z\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") " pod="openstack/placement-db-sync-t8c8z" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.856331 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-config\") pod \"dnsmasq-dns-6ffb94d8ff-7zl97\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.869494 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-t8c8z" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.873033 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w76vn\" (UniqueName: \"kubernetes.io/projected/5f53b9a1-c843-4044-a465-0a8ea7734c1f-kube-api-access-w76vn\") pod \"dnsmasq-dns-6ffb94d8ff-7zl97\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.887628 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/589c7924-baff-443f-b923-59a1348c709a-db-sync-config-data\") pod \"barbican-db-sync-6fxh4\" (UID: \"589c7924-baff-443f-b923-59a1348c709a\") " pod="openstack/barbican-db-sync-6fxh4" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.902353 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhrx4\" (UniqueName: \"kubernetes.io/projected/589c7924-baff-443f-b923-59a1348c709a-kube-api-access-qhrx4\") pod \"barbican-db-sync-6fxh4\" (UID: \"589c7924-baff-443f-b923-59a1348c709a\") " pod="openstack/barbican-db-sync-6fxh4" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.945947 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mst8x\" (UniqueName: \"kubernetes.io/projected/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-kube-api-access-mst8x\") pod \"cloudkitty-db-sync-cshtw\" (UID: \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " pod="openstack/cloudkitty-db-sync-cshtw" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.946087 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-certs\") pod \"cloudkitty-db-sync-cshtw\" (UID: \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " pod="openstack/cloudkitty-db-sync-cshtw" Jan 22 15:43:58 crc 
kubenswrapper[4825]: I0122 15:43:58.946133 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-scripts\") pod \"cloudkitty-db-sync-cshtw\" (UID: \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " pod="openstack/cloudkitty-db-sync-cshtw" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.946199 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-combined-ca-bundle\") pod \"cloudkitty-db-sync-cshtw\" (UID: \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " pod="openstack/cloudkitty-db-sync-cshtw" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.946252 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-config-data\") pod \"cloudkitty-db-sync-cshtw\" (UID: \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " pod="openstack/cloudkitty-db-sync-cshtw" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.979819 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-scripts\") pod \"cloudkitty-db-sync-cshtw\" (UID: \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " pod="openstack/cloudkitty-db-sync-cshtw" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.980156 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-config-data\") pod \"cloudkitty-db-sync-cshtw\" (UID: \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " pod="openstack/cloudkitty-db-sync-cshtw" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.980595 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-combined-ca-bundle\") pod \"cloudkitty-db-sync-cshtw\" (UID: \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " pod="openstack/cloudkitty-db-sync-cshtw" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.981186 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-certs\") pod \"cloudkitty-db-sync-cshtw\" (UID: \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " pod="openstack/cloudkitty-db-sync-cshtw" Jan 22 15:43:58 crc kubenswrapper[4825]: I0122 15:43:58.986818 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mst8x\" (UniqueName: \"kubernetes.io/projected/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-kube-api-access-mst8x\") pod \"cloudkitty-db-sync-cshtw\" (UID: \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " pod="openstack/cloudkitty-db-sync-cshtw" Jan 22 15:43:59 crc kubenswrapper[4825]: I0122 15:43:59.037091 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6fxh4" Jan 22 15:43:59 crc kubenswrapper[4825]: I0122 15:43:59.064657 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:43:59 crc kubenswrapper[4825]: I0122 15:43:59.080199 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:43:59 crc kubenswrapper[4825]: I0122 15:43:59.152901 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-cshtw" Jan 22 15:43:59 crc kubenswrapper[4825]: I0122 15:43:59.281901 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-zj6lt"] Jan 22 15:43:59 crc kubenswrapper[4825]: I0122 15:43:59.412882 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8m8d9"] Jan 22 15:43:59 crc kubenswrapper[4825]: I0122 15:43:59.633141 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8m8d9" event={"ID":"2ade185c-19d9-4085-a9e0-b2c011665989","Type":"ContainerStarted","Data":"2491e05064a6e8abf765a779396b8341f7e8db1e7053a0bc5e478bc5ffa20e97"} Jan 22 15:43:59 crc kubenswrapper[4825]: I0122 15:43:59.635156 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" event={"ID":"a1af22fb-1bea-49bf-8458-5a0f68e472c4","Type":"ContainerStarted","Data":"e63e87849992c4ae75cdd3fcbd6a8f5edd97146d78e6b385a580c11f456fa02a"} Jan 22 15:43:59 crc kubenswrapper[4825]: I0122 15:43:59.635182 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" event={"ID":"a1af22fb-1bea-49bf-8458-5a0f68e472c4","Type":"ContainerStarted","Data":"ce228ea857bccb42ae90191d1853abf114e3ce95eda538e29e8d6e98e4d10593"} Jan 22 15:43:59 crc kubenswrapper[4825]: I0122 15:43:59.657704 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-72pdg"] Jan 22 15:44:00 crc kubenswrapper[4825]: I0122 15:44:00.054318 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6fxh4"] Jan 22 15:44:00 crc kubenswrapper[4825]: I0122 15:44:00.095594 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-7zl97"] Jan 22 15:44:00 crc kubenswrapper[4825]: I0122 15:44:00.118798 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-54vjv"] Jan 22 15:44:00 crc 
kubenswrapper[4825]: I0122 15:44:00.164376 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-t8c8z"] Jan 22 15:44:00 crc kubenswrapper[4825]: I0122 15:44:00.251330 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:44:00 crc kubenswrapper[4825]: I0122 15:44:00.430374 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-cshtw"] Jan 22 15:44:00 crc kubenswrapper[4825]: I0122 15:44:00.443807 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:44:00 crc kubenswrapper[4825]: I0122 15:44:00.655455 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-72pdg" event={"ID":"fbc53412-3b60-473b-a918-df5872264c8e","Type":"ContainerStarted","Data":"d474c70c077c3c1a66580d8ea3df936ae81927c8b3bd1312fa5f979e46fb0e71"} Jan 22 15:44:00 crc kubenswrapper[4825]: I0122 15:44:00.656777 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-72pdg" event={"ID":"fbc53412-3b60-473b-a918-df5872264c8e","Type":"ContainerStarted","Data":"29163a2ac99e6a14fb61dfc7e0a039405fcf0b358cd68d1043cedeb7a10ebb3d"} Jan 22 15:44:00 crc kubenswrapper[4825]: I0122 15:44:00.664108 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8m8d9" event={"ID":"2ade185c-19d9-4085-a9e0-b2c011665989","Type":"ContainerStarted","Data":"068f8f66108a9aeaf25df845bbf6b3e92de862d40a0e022c96fdecd65997b158"} Jan 22 15:44:00 crc kubenswrapper[4825]: I0122 15:44:00.667340 4825 generic.go:334] "Generic (PLEG): container finished" podID="a1af22fb-1bea-49bf-8458-5a0f68e472c4" containerID="e63e87849992c4ae75cdd3fcbd6a8f5edd97146d78e6b385a580c11f456fa02a" exitCode=0 Jan 22 15:44:00 crc kubenswrapper[4825]: I0122 15:44:00.667385 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" 
event={"ID":"a1af22fb-1bea-49bf-8458-5a0f68e472c4","Type":"ContainerDied","Data":"e63e87849992c4ae75cdd3fcbd6a8f5edd97146d78e6b385a580c11f456fa02a"} Jan 22 15:44:00 crc kubenswrapper[4825]: I0122 15:44:00.678426 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-72pdg" podStartSLOduration=2.678409419 podStartE2EDuration="2.678409419s" podCreationTimestamp="2026-01-22 15:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:44:00.671769868 +0000 UTC m=+1187.433296778" watchObservedRunningTime="2026-01-22 15:44:00.678409419 +0000 UTC m=+1187.439936329" Jan 22 15:44:00 crc kubenswrapper[4825]: I0122 15:44:00.700895 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8m8d9" podStartSLOduration=3.700856563 podStartE2EDuration="3.700856563s" podCreationTimestamp="2026-01-22 15:43:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:44:00.689093045 +0000 UTC m=+1187.450619955" watchObservedRunningTime="2026-01-22 15:44:00.700856563 +0000 UTC m=+1187.462383473" Jan 22 15:44:00 crc kubenswrapper[4825]: W0122 15:44:00.746534 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod589c7924_baff_443f_b923_59a1348c709a.slice/crio-de55792865ba5a02c129844a00905c9b042bd407f7ec656fe41368c8fafe5d2f WatchSource:0}: Error finding container de55792865ba5a02c129844a00905c9b042bd407f7ec656fe41368c8fafe5d2f: Status 404 returned error can't find the container with id de55792865ba5a02c129844a00905c9b042bd407f7ec656fe41368c8fafe5d2f Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.035698 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.041801 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66sds\" (UniqueName: \"kubernetes.io/projected/a1af22fb-1bea-49bf-8458-5a0f68e472c4-kube-api-access-66sds\") pod \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\" (UID: \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.042005 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-ovsdbserver-sb\") pod \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\" (UID: \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.042451 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-ovsdbserver-nb\") pod \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\" (UID: \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.042520 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-config\") pod \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\" (UID: \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.042546 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-dns-svc\") pod \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\" (UID: \"a1af22fb-1bea-49bf-8458-5a0f68e472c4\") " Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.046679 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a1af22fb-1bea-49bf-8458-5a0f68e472c4-kube-api-access-66sds" (OuterVolumeSpecName: "kube-api-access-66sds") pod "a1af22fb-1bea-49bf-8458-5a0f68e472c4" (UID: "a1af22fb-1bea-49bf-8458-5a0f68e472c4"). InnerVolumeSpecName "kube-api-access-66sds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.111663 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a1af22fb-1bea-49bf-8458-5a0f68e472c4" (UID: "a1af22fb-1bea-49bf-8458-5a0f68e472c4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.124704 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a1af22fb-1bea-49bf-8458-5a0f68e472c4" (UID: "a1af22fb-1bea-49bf-8458-5a0f68e472c4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.133956 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-config" (OuterVolumeSpecName: "config") pod "a1af22fb-1bea-49bf-8458-5a0f68e472c4" (UID: "a1af22fb-1bea-49bf-8458-5a0f68e472c4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.145352 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.145399 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.145431 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.145456 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66sds\" (UniqueName: \"kubernetes.io/projected/a1af22fb-1bea-49bf-8458-5a0f68e472c4-kube-api-access-66sds\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.157552 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a1af22fb-1bea-49bf-8458-5a0f68e472c4" (UID: "a1af22fb-1bea-49bf-8458-5a0f68e472c4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.246358 4825 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1af22fb-1bea-49bf-8458-5a0f68e472c4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.708675 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62f00afd-c39a-409f-ba5e-b5474959717b","Type":"ContainerStarted","Data":"610f2a586a41909d7929cfb77e2c1bdda11577f931d70af321ffd07c04987064"} Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.713151 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.713374 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-zj6lt" event={"ID":"a1af22fb-1bea-49bf-8458-5a0f68e472c4","Type":"ContainerDied","Data":"ce228ea857bccb42ae90191d1853abf114e3ce95eda538e29e8d6e98e4d10593"} Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.713590 4825 scope.go:117] "RemoveContainer" containerID="e63e87849992c4ae75cdd3fcbd6a8f5edd97146d78e6b385a580c11f456fa02a" Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.735530 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-cshtw" event={"ID":"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e","Type":"ContainerStarted","Data":"5bf2e54c15e1004ba3028e4f9470c6b103592565f3d6c8e1ca192ef34da4ba83"} Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.737196 4825 generic.go:334] "Generic (PLEG): container finished" podID="5f53b9a1-c843-4044-a465-0a8ea7734c1f" containerID="b9f48ef41b6562a93718b9d93f3b48d21018d719d3dbb0f2cedb5ad10af69c4e" exitCode=0 Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.737241 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" event={"ID":"5f53b9a1-c843-4044-a465-0a8ea7734c1f","Type":"ContainerDied","Data":"b9f48ef41b6562a93718b9d93f3b48d21018d719d3dbb0f2cedb5ad10af69c4e"} Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.737257 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" event={"ID":"5f53b9a1-c843-4044-a465-0a8ea7734c1f","Type":"ContainerStarted","Data":"26faf806db4118b619f075717942b2814ec9a6040ce5a18231b0526f9250bccb"} Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.747249 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t8c8z" event={"ID":"d2480086-9709-4e61-af71-042055623d32","Type":"ContainerStarted","Data":"babca5fe3ed1533d2c569e8fbb2ffabc2b6f6badd6963c3ee342df6219373b51"} Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.757371 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e224d84c-5d76-451a-9c81-bdf42336c375","Type":"ContainerStarted","Data":"886c52e70108b27db693d2a51d60620639aef7096c69ad261f1eb4796bde0c0f"} Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.764284 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6fxh4" event={"ID":"589c7924-baff-443f-b923-59a1348c709a","Type":"ContainerStarted","Data":"de55792865ba5a02c129844a00905c9b042bd407f7ec656fe41368c8fafe5d2f"} Jan 22 15:44:01 crc kubenswrapper[4825]: I0122 15:44:01.788707 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-54vjv" event={"ID":"7211decb-e02d-47e6-9ea7-493e8e6a3743","Type":"ContainerStarted","Data":"32a781f74fd2048e112452706ee998cf2c328818530bb5df2f8158dc3b2de71b"} Jan 22 15:44:02 crc kubenswrapper[4825]: I0122 15:44:02.028852 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-zj6lt"] Jan 22 15:44:02 crc kubenswrapper[4825]: I0122 15:44:02.048899 4825 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-zj6lt"] Jan 22 15:44:02 crc kubenswrapper[4825]: I0122 15:44:02.834274 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62f00afd-c39a-409f-ba5e-b5474959717b","Type":"ContainerStarted","Data":"47d5c7d1d948d9949cc57196f170d928a572347cd23e5a807f1f70199015cb27"} Jan 22 15:44:02 crc kubenswrapper[4825]: I0122 15:44:02.834656 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62f00afd-c39a-409f-ba5e-b5474959717b","Type":"ContainerStarted","Data":"b65be04306a16ebc9e1a890acfda81b0f8ed06f42c81d155496bde1865cc7457"} Jan 22 15:44:02 crc kubenswrapper[4825]: I0122 15:44:02.851285 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" event={"ID":"5f53b9a1-c843-4044-a465-0a8ea7734c1f","Type":"ContainerStarted","Data":"412dd9be81ce6526e59f86ae0acb43dd1cf93de6048b7c0c1fe775ac78326289"} Jan 22 15:44:02 crc kubenswrapper[4825]: I0122 15:44:02.853059 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:44:02 crc kubenswrapper[4825]: I0122 15:44:02.864848 4825 generic.go:334] "Generic (PLEG): container finished" podID="667e755d-b6f5-4280-9640-a7a893684b7f" containerID="48e9e0f119fbe4d06ec85a9e4863d0f0a09e4d15efcd3ff0736beadb9b452f31" exitCode=0 Jan 22 15:44:02 crc kubenswrapper[4825]: I0122 15:44:02.864908 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"667e755d-b6f5-4280-9640-a7a893684b7f","Type":"ContainerDied","Data":"48e9e0f119fbe4d06ec85a9e4863d0f0a09e4d15efcd3ff0736beadb9b452f31"} Jan 22 15:44:02 crc kubenswrapper[4825]: I0122 15:44:02.876924 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" podStartSLOduration=4.876896188 podStartE2EDuration="4.876896188s" 
podCreationTimestamp="2026-01-22 15:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:44:02.874505749 +0000 UTC m=+1189.636032669" watchObservedRunningTime="2026-01-22 15:44:02.876896188 +0000 UTC m=+1189.638423098" Jan 22 15:44:03 crc kubenswrapper[4825]: I0122 15:44:03.542510 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1af22fb-1bea-49bf-8458-5a0f68e472c4" path="/var/lib/kubelet/pods/a1af22fb-1bea-49bf-8458-5a0f68e472c4/volumes" Jan 22 15:44:03 crc kubenswrapper[4825]: I0122 15:44:03.904398 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62f00afd-c39a-409f-ba5e-b5474959717b","Type":"ContainerStarted","Data":"4581051142fc4317afef612dbba3c684e25026e6f8c5b019bb3f57192a429abc"} Jan 22 15:44:03 crc kubenswrapper[4825]: I0122 15:44:03.904652 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62f00afd-c39a-409f-ba5e-b5474959717b","Type":"ContainerStarted","Data":"1a2831b65b560bf967aaeb7977c74950a307e0b4060dfb44e1d9e1f9401ca227"} Jan 22 15:44:03 crc kubenswrapper[4825]: I0122 15:44:03.904661 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"62f00afd-c39a-409f-ba5e-b5474959717b","Type":"ContainerStarted","Data":"e0b10e5c9f44511f5e1842bdab0451e4c3a085bd8a71adb4a80d833fc8e365ac"} Jan 22 15:44:03 crc kubenswrapper[4825]: I0122 15:44:03.908878 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"667e755d-b6f5-4280-9640-a7a893684b7f","Type":"ContainerStarted","Data":"813464358b56611e074fd09f4f3190e9673c6ce2be4212f856d14188e969f3aa"} Jan 22 15:44:04 crc kubenswrapper[4825]: I0122 15:44:04.925882 4825 generic.go:334] "Generic (PLEG): container finished" podID="2ade185c-19d9-4085-a9e0-b2c011665989" 
containerID="068f8f66108a9aeaf25df845bbf6b3e92de862d40a0e022c96fdecd65997b158" exitCode=0 Jan 22 15:44:04 crc kubenswrapper[4825]: I0122 15:44:04.925973 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8m8d9" event={"ID":"2ade185c-19d9-4085-a9e0-b2c011665989","Type":"ContainerDied","Data":"068f8f66108a9aeaf25df845bbf6b3e92de862d40a0e022c96fdecd65997b158"} Jan 22 15:44:05 crc kubenswrapper[4825]: I0122 15:44:05.549592 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:44:05 crc kubenswrapper[4825]: I0122 15:44:05.549645 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:44:05 crc kubenswrapper[4825]: I0122 15:44:05.569463 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:44:05 crc kubenswrapper[4825]: I0122 15:44:05.570499 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e2cd9ccac91574642f11cb7b9691d30ced63e64cba6f480b19075fcb4ac2cb1"} pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 15:44:05 crc kubenswrapper[4825]: I0122 15:44:05.570579 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" 
podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" containerID="cri-o://2e2cd9ccac91574642f11cb7b9691d30ced63e64cba6f480b19075fcb4ac2cb1" gracePeriod=600 Jan 22 15:44:05 crc kubenswrapper[4825]: I0122 15:44:05.938539 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r6p27" event={"ID":"ffcfdefe-f831-469c-9423-6cd4399435a7","Type":"ContainerStarted","Data":"e18cf9e77c5bc94544ef3c1a26b9abea5ff78a0a040ad43c4d195ff82daa2664"} Jan 22 15:44:05 crc kubenswrapper[4825]: I0122 15:44:05.942226 4825 generic.go:334] "Generic (PLEG): container finished" podID="1d6015ae-d193-4854-9861-dc4384510fdb" containerID="2e2cd9ccac91574642f11cb7b9691d30ced63e64cba6f480b19075fcb4ac2cb1" exitCode=0 Jan 22 15:44:05 crc kubenswrapper[4825]: I0122 15:44:05.942294 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerDied","Data":"2e2cd9ccac91574642f11cb7b9691d30ced63e64cba6f480b19075fcb4ac2cb1"} Jan 22 15:44:05 crc kubenswrapper[4825]: I0122 15:44:05.942355 4825 scope.go:117] "RemoveContainer" containerID="7c411fc0ec7bfe151046cb879197a0f2e7e0a4bd2d89c00b4f28d59849883ce9" Jan 22 15:44:05 crc kubenswrapper[4825]: I0122 15:44:05.970491 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-r6p27" podStartSLOduration=3.733344965 podStartE2EDuration="37.970464216s" podCreationTimestamp="2026-01-22 15:43:28 +0000 UTC" firstStartedPulling="2026-01-22 15:43:29.805746101 +0000 UTC m=+1156.567273011" lastFinishedPulling="2026-01-22 15:44:04.042865352 +0000 UTC m=+1190.804392262" observedRunningTime="2026-01-22 15:44:05.958706149 +0000 UTC m=+1192.720233079" watchObservedRunningTime="2026-01-22 15:44:05.970464216 +0000 UTC m=+1192.731991126" Jan 22 15:44:05 crc kubenswrapper[4825]: I0122 15:44:05.971784 4825 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/swift-storage-0" event={"ID":"62f00afd-c39a-409f-ba5e-b5474959717b","Type":"ContainerStarted","Data":"15967d82f1b499729c98a7bf0c13de0a98e03cd2837dbf1457649c48c3218edf"} Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.346720 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=50.049007511 podStartE2EDuration="1m0.346695264s" podCreationTimestamp="2026-01-22 15:43:06 +0000 UTC" firstStartedPulling="2026-01-22 15:43:50.54150469 +0000 UTC m=+1177.303031640" lastFinishedPulling="2026-01-22 15:44:00.839192483 +0000 UTC m=+1187.600719393" observedRunningTime="2026-01-22 15:44:06.022193971 +0000 UTC m=+1192.783720871" watchObservedRunningTime="2026-01-22 15:44:06.346695264 +0000 UTC m=+1193.108222174" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.349120 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-7zl97"] Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.349361 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" podUID="5f53b9a1-c843-4044-a465-0a8ea7734c1f" containerName="dnsmasq-dns" containerID="cri-o://412dd9be81ce6526e59f86ae0acb43dd1cf93de6048b7c0c1fe775ac78326289" gracePeriod=10 Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.401956 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-55djf"] Jan 22 15:44:06 crc kubenswrapper[4825]: E0122 15:44:06.402387 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1af22fb-1bea-49bf-8458-5a0f68e472c4" containerName="init" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.402400 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1af22fb-1bea-49bf-8458-5a0f68e472c4" containerName="init" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.404203 4825 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a1af22fb-1bea-49bf-8458-5a0f68e472c4" containerName="init" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.405364 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.411775 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.423803 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-55djf"] Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.537550 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-55djf\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.537615 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-55djf\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.537641 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf52c\" (UniqueName: \"kubernetes.io/projected/0899ccaa-6936-4c34-92d3-e579cb0f0bea-kube-api-access-hf52c\") pod \"dnsmasq-dns-cf78879c9-55djf\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.537665 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-config\") pod \"dnsmasq-dns-cf78879c9-55djf\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.537827 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-dns-svc\") pod \"dnsmasq-dns-cf78879c9-55djf\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.538174 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-55djf\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.639671 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-dns-svc\") pod \"dnsmasq-dns-cf78879c9-55djf\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.639833 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-55djf\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.640643 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-dns-svc\") pod \"dnsmasq-dns-cf78879c9-55djf\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.640762 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-55djf\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.640956 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-55djf\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.640999 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-55djf\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.641016 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf52c\" (UniqueName: \"kubernetes.io/projected/0899ccaa-6936-4c34-92d3-e579cb0f0bea-kube-api-access-hf52c\") pod \"dnsmasq-dns-cf78879c9-55djf\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.641047 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-config\") pod \"dnsmasq-dns-cf78879c9-55djf\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.641656 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-config\") pod \"dnsmasq-dns-cf78879c9-55djf\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.642749 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-55djf\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.643305 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-55djf\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.664449 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf52c\" (UniqueName: \"kubernetes.io/projected/0899ccaa-6936-4c34-92d3-e579cb0f0bea-kube-api-access-hf52c\") pod \"dnsmasq-dns-cf78879c9-55djf\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.748568 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.986025 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"667e755d-b6f5-4280-9640-a7a893684b7f","Type":"ContainerStarted","Data":"0dd614381bd711f5d67f192fe3720ddab5f5385fee2e0175762b3f08b8580b05"} Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.989343 4825 generic.go:334] "Generic (PLEG): container finished" podID="5f53b9a1-c843-4044-a465-0a8ea7734c1f" containerID="412dd9be81ce6526e59f86ae0acb43dd1cf93de6048b7c0c1fe775ac78326289" exitCode=0 Jan 22 15:44:06 crc kubenswrapper[4825]: I0122 15:44:06.989927 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" event={"ID":"5f53b9a1-c843-4044-a465-0a8ea7734c1f","Type":"ContainerDied","Data":"412dd9be81ce6526e59f86ae0acb43dd1cf93de6048b7c0c1fe775ac78326289"} Jan 22 15:44:09 crc kubenswrapper[4825]: I0122 15:44:09.087465 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" podUID="5f53b9a1-c843-4044-a465-0a8ea7734c1f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.169:5353: connect: connection refused" Jan 22 15:44:10 crc kubenswrapper[4825]: I0122 15:44:10.808444 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:44:10 crc kubenswrapper[4825]: I0122 15:44:10.965115 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-combined-ca-bundle\") pod \"2ade185c-19d9-4085-a9e0-b2c011665989\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " Jan 22 15:44:10 crc kubenswrapper[4825]: I0122 15:44:10.965393 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-credential-keys\") pod \"2ade185c-19d9-4085-a9e0-b2c011665989\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " Jan 22 15:44:10 crc kubenswrapper[4825]: I0122 15:44:10.965452 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kbmj\" (UniqueName: \"kubernetes.io/projected/2ade185c-19d9-4085-a9e0-b2c011665989-kube-api-access-7kbmj\") pod \"2ade185c-19d9-4085-a9e0-b2c011665989\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " Jan 22 15:44:10 crc kubenswrapper[4825]: I0122 15:44:10.965489 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-scripts\") pod \"2ade185c-19d9-4085-a9e0-b2c011665989\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " Jan 22 15:44:10 crc kubenswrapper[4825]: I0122 15:44:10.965505 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-config-data\") pod \"2ade185c-19d9-4085-a9e0-b2c011665989\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " Jan 22 15:44:10 crc kubenswrapper[4825]: I0122 15:44:10.965531 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-fernet-keys\") pod \"2ade185c-19d9-4085-a9e0-b2c011665989\" (UID: \"2ade185c-19d9-4085-a9e0-b2c011665989\") " Jan 22 15:44:10 crc kubenswrapper[4825]: I0122 15:44:10.971726 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2ade185c-19d9-4085-a9e0-b2c011665989" (UID: "2ade185c-19d9-4085-a9e0-b2c011665989"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:10 crc kubenswrapper[4825]: I0122 15:44:10.971784 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ade185c-19d9-4085-a9e0-b2c011665989-kube-api-access-7kbmj" (OuterVolumeSpecName: "kube-api-access-7kbmj") pod "2ade185c-19d9-4085-a9e0-b2c011665989" (UID: "2ade185c-19d9-4085-a9e0-b2c011665989"). InnerVolumeSpecName "kube-api-access-7kbmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:44:10 crc kubenswrapper[4825]: I0122 15:44:10.973482 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-scripts" (OuterVolumeSpecName: "scripts") pod "2ade185c-19d9-4085-a9e0-b2c011665989" (UID: "2ade185c-19d9-4085-a9e0-b2c011665989"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:10 crc kubenswrapper[4825]: I0122 15:44:10.988749 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2ade185c-19d9-4085-a9e0-b2c011665989" (UID: "2ade185c-19d9-4085-a9e0-b2c011665989"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:10 crc kubenswrapper[4825]: I0122 15:44:10.996562 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-config-data" (OuterVolumeSpecName: "config-data") pod "2ade185c-19d9-4085-a9e0-b2c011665989" (UID: "2ade185c-19d9-4085-a9e0-b2c011665989"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:11 crc kubenswrapper[4825]: I0122 15:44:11.009943 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ade185c-19d9-4085-a9e0-b2c011665989" (UID: "2ade185c-19d9-4085-a9e0-b2c011665989"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:11 crc kubenswrapper[4825]: I0122 15:44:11.029288 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8m8d9" event={"ID":"2ade185c-19d9-4085-a9e0-b2c011665989","Type":"ContainerDied","Data":"2491e05064a6e8abf765a779396b8341f7e8db1e7053a0bc5e478bc5ffa20e97"} Jan 22 15:44:11 crc kubenswrapper[4825]: I0122 15:44:11.029330 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2491e05064a6e8abf765a779396b8341f7e8db1e7053a0bc5e478bc5ffa20e97" Jan 22 15:44:11 crc kubenswrapper[4825]: I0122 15:44:11.029361 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8m8d9" Jan 22 15:44:11 crc kubenswrapper[4825]: I0122 15:44:11.067738 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:11 crc kubenswrapper[4825]: I0122 15:44:11.067775 4825 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:11 crc kubenswrapper[4825]: I0122 15:44:11.067787 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kbmj\" (UniqueName: \"kubernetes.io/projected/2ade185c-19d9-4085-a9e0-b2c011665989-kube-api-access-7kbmj\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:11 crc kubenswrapper[4825]: I0122 15:44:11.067799 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:11 crc kubenswrapper[4825]: I0122 15:44:11.067809 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:11 crc kubenswrapper[4825]: I0122 15:44:11.067818 4825 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ade185c-19d9-4085-a9e0-b2c011665989-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:11 crc kubenswrapper[4825]: I0122 15:44:11.922700 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8m8d9"] Jan 22 15:44:11 crc kubenswrapper[4825]: I0122 15:44:11.933846 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8m8d9"] Jan 22 15:44:12 crc 
kubenswrapper[4825]: I0122 15:44:12.008067 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-kwfvj"] Jan 22 15:44:12 crc kubenswrapper[4825]: E0122 15:44:12.009554 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ade185c-19d9-4085-a9e0-b2c011665989" containerName="keystone-bootstrap" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.009581 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ade185c-19d9-4085-a9e0-b2c011665989" containerName="keystone-bootstrap" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.010099 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ade185c-19d9-4085-a9e0-b2c011665989" containerName="keystone-bootstrap" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.011320 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.015765 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.015879 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.016100 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g9wlz" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.019752 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.037197 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kwfvj"] Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.136152 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-config-data\") pod \"keystone-bootstrap-kwfvj\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") " pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.136207 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-fernet-keys\") pod \"keystone-bootstrap-kwfvj\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") " pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.136485 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-scripts\") pod \"keystone-bootstrap-kwfvj\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") " pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.136536 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-combined-ca-bundle\") pod \"keystone-bootstrap-kwfvj\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") " pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.136598 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-credential-keys\") pod \"keystone-bootstrap-kwfvj\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") " pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.136669 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9bd4\" (UniqueName: 
\"kubernetes.io/projected/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-kube-api-access-v9bd4\") pod \"keystone-bootstrap-kwfvj\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") " pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.238415 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-config-data\") pod \"keystone-bootstrap-kwfvj\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") " pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.238692 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-fernet-keys\") pod \"keystone-bootstrap-kwfvj\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") " pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.238894 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-scripts\") pod \"keystone-bootstrap-kwfvj\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") " pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.238995 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-combined-ca-bundle\") pod \"keystone-bootstrap-kwfvj\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") " pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.239078 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-credential-keys\") pod \"keystone-bootstrap-kwfvj\" 
(UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") " pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.239179 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9bd4\" (UniqueName: \"kubernetes.io/projected/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-kube-api-access-v9bd4\") pod \"keystone-bootstrap-kwfvj\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") " pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.242910 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-fernet-keys\") pod \"keystone-bootstrap-kwfvj\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") " pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.243162 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-scripts\") pod \"keystone-bootstrap-kwfvj\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") " pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.244675 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-credential-keys\") pod \"keystone-bootstrap-kwfvj\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") " pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.245644 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-config-data\") pod \"keystone-bootstrap-kwfvj\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") " pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 
15:44:12.246519 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-combined-ca-bundle\") pod \"keystone-bootstrap-kwfvj\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") " pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.256395 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9bd4\" (UniqueName: \"kubernetes.io/projected/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-kube-api-access-v9bd4\") pod \"keystone-bootstrap-kwfvj\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") " pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:12 crc kubenswrapper[4825]: I0122 15:44:12.332711 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kwfvj" Jan 22 15:44:13 crc kubenswrapper[4825]: I0122 15:44:13.534766 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ade185c-19d9-4085-a9e0-b2c011665989" path="/var/lib/kubelet/pods/2ade185c-19d9-4085-a9e0-b2c011665989/volumes" Jan 22 15:44:14 crc kubenswrapper[4825]: I0122 15:44:14.087529 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" podUID="5f53b9a1-c843-4044-a465-0a8ea7734c1f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.169:5353: connect: connection refused" Jan 22 15:44:14 crc kubenswrapper[4825]: E0122 15:44:14.668829 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 22 15:44:14 crc kubenswrapper[4825]: E0122 15:44:14.669320 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bdgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vol
umeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-t8c8z_openstack(d2480086-9709-4e61-af71-042055623d32): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:44:14 crc kubenswrapper[4825]: E0122 15:44:14.671132 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-t8c8z" podUID="d2480086-9709-4e61-af71-042055623d32" Jan 22 15:44:15 crc kubenswrapper[4825]: E0122 15:44:15.074267 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-t8c8z" podUID="d2480086-9709-4e61-af71-042055623d32" Jan 22 15:44:19 crc kubenswrapper[4825]: I0122 15:44:19.116294 4825 generic.go:334] "Generic (PLEG): container finished" podID="ffcfdefe-f831-469c-9423-6cd4399435a7" containerID="e18cf9e77c5bc94544ef3c1a26b9abea5ff78a0a040ad43c4d195ff82daa2664" exitCode=0 Jan 22 15:44:19 crc kubenswrapper[4825]: I0122 15:44:19.116461 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r6p27" event={"ID":"ffcfdefe-f831-469c-9423-6cd4399435a7","Type":"ContainerDied","Data":"e18cf9e77c5bc94544ef3c1a26b9abea5ff78a0a040ad43c4d195ff82daa2664"} Jan 22 15:44:22 crc kubenswrapper[4825]: I0122 15:44:22.153279 4825 generic.go:334] "Generic (PLEG): container finished" podID="fbc53412-3b60-473b-a918-df5872264c8e" containerID="d474c70c077c3c1a66580d8ea3df936ae81927c8b3bd1312fa5f979e46fb0e71" exitCode=0 Jan 22 15:44:22 crc kubenswrapper[4825]: I0122 15:44:22.153357 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-sync-72pdg" event={"ID":"fbc53412-3b60-473b-a918-df5872264c8e","Type":"ContainerDied","Data":"d474c70c077c3c1a66580d8ea3df936ae81927c8b3bd1312fa5f979e46fb0e71"} Jan 22 15:44:24 crc kubenswrapper[4825]: I0122 15:44:24.088632 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" podUID="5f53b9a1-c843-4044-a465-0a8ea7734c1f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.169:5353: i/o timeout" Jan 22 15:44:27 crc kubenswrapper[4825]: E0122 15:44:27.371861 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 22 15:44:27 crc kubenswrapper[4825]: E0122 15:44:27.372486 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n54bh549h6ch665h586h6dh67h64ch5bdh59bh564h5f9h57chc8h55h664h667hf6h547hc9h5dbh666h587hddh88h5b7h545h67h555h6h59bhfq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dckz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e224d84c-5d76-451a-9c81-bdf42336c375): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.374569 4825 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.800545 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.811536 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-72pdg" Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.826315 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-r6p27" Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.909800 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffcfdefe-f831-469c-9423-6cd4399435a7-combined-ca-bundle\") pod \"ffcfdefe-f831-469c-9423-6cd4399435a7\" (UID: \"ffcfdefe-f831-469c-9423-6cd4399435a7\") " Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.909853 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-config\") pod \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.909931 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df7p8\" (UniqueName: \"kubernetes.io/projected/ffcfdefe-f831-469c-9423-6cd4399435a7-kube-api-access-df7p8\") pod \"ffcfdefe-f831-469c-9423-6cd4399435a7\" (UID: \"ffcfdefe-f831-469c-9423-6cd4399435a7\") " Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.909960 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffcfdefe-f831-469c-9423-6cd4399435a7-config-data\") pod \"ffcfdefe-f831-469c-9423-6cd4399435a7\" (UID: \"ffcfdefe-f831-469c-9423-6cd4399435a7\") " Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.910024 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n84pg\" (UniqueName: \"kubernetes.io/projected/fbc53412-3b60-473b-a918-df5872264c8e-kube-api-access-n84pg\") pod \"fbc53412-3b60-473b-a918-df5872264c8e\" (UID: \"fbc53412-3b60-473b-a918-df5872264c8e\") " Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.910046 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ffcfdefe-f831-469c-9423-6cd4399435a7-db-sync-config-data\") pod \"ffcfdefe-f831-469c-9423-6cd4399435a7\" (UID: \"ffcfdefe-f831-469c-9423-6cd4399435a7\") " Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.910099 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-ovsdbserver-sb\") pod \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.910157 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w76vn\" (UniqueName: \"kubernetes.io/projected/5f53b9a1-c843-4044-a465-0a8ea7734c1f-kube-api-access-w76vn\") pod \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.910230 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-dns-svc\") pod \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.910259 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-ovsdbserver-nb\") pod \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\" (UID: \"5f53b9a1-c843-4044-a465-0a8ea7734c1f\") " Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.910284 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc53412-3b60-473b-a918-df5872264c8e-combined-ca-bundle\") pod \"fbc53412-3b60-473b-a918-df5872264c8e\" (UID: \"fbc53412-3b60-473b-a918-df5872264c8e\") " Jan 22 
15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.910300 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbc53412-3b60-473b-a918-df5872264c8e-config\") pod \"fbc53412-3b60-473b-a918-df5872264c8e\" (UID: \"fbc53412-3b60-473b-a918-df5872264c8e\") " Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.917462 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffcfdefe-f831-469c-9423-6cd4399435a7-kube-api-access-df7p8" (OuterVolumeSpecName: "kube-api-access-df7p8") pod "ffcfdefe-f831-469c-9423-6cd4399435a7" (UID: "ffcfdefe-f831-469c-9423-6cd4399435a7"). InnerVolumeSpecName "kube-api-access-df7p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.932279 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f53b9a1-c843-4044-a465-0a8ea7734c1f-kube-api-access-w76vn" (OuterVolumeSpecName: "kube-api-access-w76vn") pod "5f53b9a1-c843-4044-a465-0a8ea7734c1f" (UID: "5f53b9a1-c843-4044-a465-0a8ea7734c1f"). InnerVolumeSpecName "kube-api-access-w76vn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.944774 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbc53412-3b60-473b-a918-df5872264c8e-kube-api-access-n84pg" (OuterVolumeSpecName: "kube-api-access-n84pg") pod "fbc53412-3b60-473b-a918-df5872264c8e" (UID: "fbc53412-3b60-473b-a918-df5872264c8e"). InnerVolumeSpecName "kube-api-access-n84pg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.955702 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffcfdefe-f831-469c-9423-6cd4399435a7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ffcfdefe-f831-469c-9423-6cd4399435a7" (UID: "ffcfdefe-f831-469c-9423-6cd4399435a7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:27 crc kubenswrapper[4825]: I0122 15:44:27.996410 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbc53412-3b60-473b-a918-df5872264c8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbc53412-3b60-473b-a918-df5872264c8e" (UID: "fbc53412-3b60-473b-a918-df5872264c8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.002869 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbc53412-3b60-473b-a918-df5872264c8e-config" (OuterVolumeSpecName: "config") pod "fbc53412-3b60-473b-a918-df5872264c8e" (UID: "fbc53412-3b60-473b-a918-df5872264c8e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.004081 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffcfdefe-f831-469c-9423-6cd4399435a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffcfdefe-f831-469c-9423-6cd4399435a7" (UID: "ffcfdefe-f831-469c-9423-6cd4399435a7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.013250 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w76vn\" (UniqueName: \"kubernetes.io/projected/5f53b9a1-c843-4044-a465-0a8ea7734c1f-kube-api-access-w76vn\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.013286 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc53412-3b60-473b-a918-df5872264c8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.013296 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fbc53412-3b60-473b-a918-df5872264c8e-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.013306 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffcfdefe-f831-469c-9423-6cd4399435a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.013315 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df7p8\" (UniqueName: \"kubernetes.io/projected/ffcfdefe-f831-469c-9423-6cd4399435a7-kube-api-access-df7p8\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.013324 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n84pg\" (UniqueName: \"kubernetes.io/projected/fbc53412-3b60-473b-a918-df5872264c8e-kube-api-access-n84pg\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.013332 4825 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ffcfdefe-f831-469c-9423-6cd4399435a7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:28 crc 
kubenswrapper[4825]: I0122 15:44:28.016505 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f53b9a1-c843-4044-a465-0a8ea7734c1f" (UID: "5f53b9a1-c843-4044-a465-0a8ea7734c1f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.017586 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f53b9a1-c843-4044-a465-0a8ea7734c1f" (UID: "5f53b9a1-c843-4044-a465-0a8ea7734c1f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.019789 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-config" (OuterVolumeSpecName: "config") pod "5f53b9a1-c843-4044-a465-0a8ea7734c1f" (UID: "5f53b9a1-c843-4044-a465-0a8ea7734c1f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.025397 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffcfdefe-f831-469c-9423-6cd4399435a7-config-data" (OuterVolumeSpecName: "config-data") pod "ffcfdefe-f831-469c-9423-6cd4399435a7" (UID: "ffcfdefe-f831-469c-9423-6cd4399435a7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.044486 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f53b9a1-c843-4044-a465-0a8ea7734c1f" (UID: "5f53b9a1-c843-4044-a465-0a8ea7734c1f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.115198 4825 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.115230 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.115243 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.115251 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffcfdefe-f831-469c-9423-6cd4399435a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.115261 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f53b9a1-c843-4044-a465-0a8ea7734c1f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.214367 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" 
event={"ID":"5f53b9a1-c843-4044-a465-0a8ea7734c1f","Type":"ContainerDied","Data":"26faf806db4118b619f075717942b2814ec9a6040ce5a18231b0526f9250bccb"} Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.214451 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.221915 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-72pdg" event={"ID":"fbc53412-3b60-473b-a918-df5872264c8e","Type":"ContainerDied","Data":"29163a2ac99e6a14fb61dfc7e0a039405fcf0b358cd68d1043cedeb7a10ebb3d"} Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.221945 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29163a2ac99e6a14fb61dfc7e0a039405fcf0b358cd68d1043cedeb7a10ebb3d" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.222007 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-72pdg" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.223730 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r6p27" event={"ID":"ffcfdefe-f831-469c-9423-6cd4399435a7","Type":"ContainerDied","Data":"77f238a8fe20ec7597c8bba005d39cce5b7a78a3fd00a2f47d9b373d339737a1"} Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.223763 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77f238a8fe20ec7597c8bba005d39cce5b7a78a3fd00a2f47d9b373d339737a1" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.223810 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-r6p27" Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.255910 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-7zl97"] Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.264128 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-7zl97"] Jan 22 15:44:28 crc kubenswrapper[4825]: I0122 15:44:28.843470 4825 scope.go:117] "RemoveContainer" containerID="412dd9be81ce6526e59f86ae0acb43dd1cf93de6048b7c0c1fe775ac78326289" Jan 22 15:44:28 crc kubenswrapper[4825]: E0122 15:44:28.861510 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 22 15:44:28 crc kubenswrapper[4825]: E0122 15:44:28.861743 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dspld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-54vjv_openstack(7211decb-e02d-47e6-9ea7-493e8e6a3743): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:44:28 crc kubenswrapper[4825]: E0122 15:44:28.862885 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-54vjv" podUID="7211decb-e02d-47e6-9ea7-493e8e6a3743" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.109191 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ffb94d8ff-7zl97" podUID="5f53b9a1-c843-4044-a465-0a8ea7734c1f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.169:5353: i/o timeout" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.262080 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-55djf"] Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.331360 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"667e755d-b6f5-4280-9640-a7a893684b7f","Type":"ContainerStarted","Data":"9a33fff1445f0ed64e810b40970f8e4832663409e5546524676a6d7f6ea22952"} Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.361024 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-5x4lj"] Jan 22 15:44:29 crc kubenswrapper[4825]: E0122 15:44:29.361498 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcfdefe-f831-469c-9423-6cd4399435a7" containerName="glance-db-sync" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.361516 4825 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ffcfdefe-f831-469c-9423-6cd4399435a7" containerName="glance-db-sync" Jan 22 15:44:29 crc kubenswrapper[4825]: E0122 15:44:29.361542 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc53412-3b60-473b-a918-df5872264c8e" containerName="neutron-db-sync" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.361548 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc53412-3b60-473b-a918-df5872264c8e" containerName="neutron-db-sync" Jan 22 15:44:29 crc kubenswrapper[4825]: E0122 15:44:29.361563 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f53b9a1-c843-4044-a465-0a8ea7734c1f" containerName="dnsmasq-dns" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.361569 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f53b9a1-c843-4044-a465-0a8ea7734c1f" containerName="dnsmasq-dns" Jan 22 15:44:29 crc kubenswrapper[4825]: E0122 15:44:29.361582 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f53b9a1-c843-4044-a465-0a8ea7734c1f" containerName="init" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.361589 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f53b9a1-c843-4044-a465-0a8ea7734c1f" containerName="init" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.365285 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbc53412-3b60-473b-a918-df5872264c8e" containerName="neutron-db-sync" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.365322 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffcfdefe-f831-469c-9423-6cd4399435a7" containerName="glance-db-sync" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.365338 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f53b9a1-c843-4044-a465-0a8ea7734c1f" containerName="dnsmasq-dns" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.367869 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:29 crc kubenswrapper[4825]: E0122 15:44:29.388857 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-54vjv" podUID="7211decb-e02d-47e6-9ea7-493e8e6a3743" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.405761 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6845d75bcd-cxzv6"] Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.407874 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.414070 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.414291 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-958wc" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.414287 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.414424 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.429050 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-5x4lj"] Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.459396 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6845d75bcd-cxzv6"] Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.465934 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=38.465909356 
podStartE2EDuration="38.465909356s" podCreationTimestamp="2026-01-22 15:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:44:29.388531675 +0000 UTC m=+1216.150058585" watchObservedRunningTime="2026-01-22 15:44:29.465909356 +0000 UTC m=+1216.227436266" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.506874 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kwfvj"] Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.543104 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f53b9a1-c843-4044-a465-0a8ea7734c1f" path="/var/lib/kubelet/pods/5f53b9a1-c843-4044-a465-0a8ea7734c1f/volumes" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.548502 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-5x4lj"] Jan 22 15:44:29 crc kubenswrapper[4825]: E0122 15:44:29.549210 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-l982v ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" podUID="ea787fac-b330-4b7b-b3ee-f9230933bc6d" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.554023 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-config\") pod \"dnsmasq-dns-79cd4f6685-5x4lj\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.554083 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-httpd-config\") pod 
\"neutron-6845d75bcd-cxzv6\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.554109 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-combined-ca-bundle\") pod \"neutron-6845d75bcd-cxzv6\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.554150 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhplj\" (UniqueName: \"kubernetes.io/projected/eacf9923-7898-4237-a615-e2c8de47d3cb-kube-api-access-vhplj\") pod \"neutron-6845d75bcd-cxzv6\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.555270 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-dns-svc\") pod \"dnsmasq-dns-79cd4f6685-5x4lj\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.555864 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-config\") pod \"neutron-6845d75bcd-cxzv6\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.556038 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l982v\" (UniqueName: 
\"kubernetes.io/projected/ea787fac-b330-4b7b-b3ee-f9230933bc6d-kube-api-access-l982v\") pod \"dnsmasq-dns-79cd4f6685-5x4lj\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.558633 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-ovndb-tls-certs\") pod \"neutron-6845d75bcd-cxzv6\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.558706 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-ovsdbserver-nb\") pod \"dnsmasq-dns-79cd4f6685-5x4lj\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.558743 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-dns-swift-storage-0\") pod \"dnsmasq-dns-79cd4f6685-5x4lj\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.558780 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-ovsdbserver-sb\") pod \"dnsmasq-dns-79cd4f6685-5x4lj\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.585930 4825 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-6b7b667979-gm5qw"] Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.591509 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.608529 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-gm5qw"] Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.660716 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-ovndb-tls-certs\") pod \"neutron-6845d75bcd-cxzv6\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.660777 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-ovsdbserver-nb\") pod \"dnsmasq-dns-79cd4f6685-5x4lj\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.660800 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-dns-swift-storage-0\") pod \"dnsmasq-dns-79cd4f6685-5x4lj\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.660837 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-ovsdbserver-sb\") pod \"dnsmasq-dns-79cd4f6685-5x4lj\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 
15:44:29.660912 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-config\") pod \"dnsmasq-dns-79cd4f6685-5x4lj\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.660963 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-httpd-config\") pod \"neutron-6845d75bcd-cxzv6\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.661058 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-combined-ca-bundle\") pod \"neutron-6845d75bcd-cxzv6\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.661116 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhplj\" (UniqueName: \"kubernetes.io/projected/eacf9923-7898-4237-a615-e2c8de47d3cb-kube-api-access-vhplj\") pod \"neutron-6845d75bcd-cxzv6\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.661184 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-dns-svc\") pod \"dnsmasq-dns-79cd4f6685-5x4lj\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.661213 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-config\") pod \"neutron-6845d75bcd-cxzv6\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.661237 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l982v\" (UniqueName: \"kubernetes.io/projected/ea787fac-b330-4b7b-b3ee-f9230933bc6d-kube-api-access-l982v\") pod \"dnsmasq-dns-79cd4f6685-5x4lj\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.665078 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-ovsdbserver-nb\") pod \"dnsmasq-dns-79cd4f6685-5x4lj\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.667317 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-dns-swift-storage-0\") pod \"dnsmasq-dns-79cd4f6685-5x4lj\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.667245 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-ovsdbserver-sb\") pod \"dnsmasq-dns-79cd4f6685-5x4lj\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.680750 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-config\") pod \"dnsmasq-dns-79cd4f6685-5x4lj\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.681445 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-dns-svc\") pod \"dnsmasq-dns-79cd4f6685-5x4lj\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.684247 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-55djf"] Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.689257 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-ovndb-tls-certs\") pod \"neutron-6845d75bcd-cxzv6\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.691789 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-combined-ca-bundle\") pod \"neutron-6845d75bcd-cxzv6\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.692505 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-config\") pod \"neutron-6845d75bcd-cxzv6\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.693563 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-httpd-config\") pod \"neutron-6845d75bcd-cxzv6\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.697205 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhplj\" (UniqueName: \"kubernetes.io/projected/eacf9923-7898-4237-a615-e2c8de47d3cb-kube-api-access-vhplj\") pod \"neutron-6845d75bcd-cxzv6\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.697794 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l982v\" (UniqueName: \"kubernetes.io/projected/ea787fac-b330-4b7b-b3ee-f9230933bc6d-kube-api-access-l982v\") pod \"dnsmasq-dns-79cd4f6685-5x4lj\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.764338 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-gm5qw\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") " pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.764404 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s9zr\" (UniqueName: \"kubernetes.io/projected/3440c5ee-21a0-480d-8960-0d60146517cb-kube-api-access-9s9zr\") pod \"dnsmasq-dns-6b7b667979-gm5qw\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") " pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.764504 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-gm5qw\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") " pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.764574 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-gm5qw\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") " pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.764609 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-dns-svc\") pod \"dnsmasq-dns-6b7b667979-gm5qw\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") " pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.764662 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-config\") pod \"dnsmasq-dns-6b7b667979-gm5qw\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") " pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.771570 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.866998 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-gm5qw\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") " pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.867059 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-dns-svc\") pod \"dnsmasq-dns-6b7b667979-gm5qw\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") " pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.867111 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-config\") pod \"dnsmasq-dns-6b7b667979-gm5qw\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") " pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.867213 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-gm5qw\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") " pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.867246 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s9zr\" (UniqueName: \"kubernetes.io/projected/3440c5ee-21a0-480d-8960-0d60146517cb-kube-api-access-9s9zr\") pod \"dnsmasq-dns-6b7b667979-gm5qw\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") " pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 
22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.867316 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-gm5qw\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") " pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.868312 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-gm5qw\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") " pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.868935 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-gm5qw\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") " pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.869764 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-config\") pod \"dnsmasq-dns-6b7b667979-gm5qw\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") " pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.869844 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-gm5qw\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") " pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.870827 4825 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-dns-svc\") pod \"dnsmasq-dns-6b7b667979-gm5qw\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") " pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.898970 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s9zr\" (UniqueName: \"kubernetes.io/projected/3440c5ee-21a0-480d-8960-0d60146517cb-kube-api-access-9s9zr\") pod \"dnsmasq-dns-6b7b667979-gm5qw\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") " pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:29 crc kubenswrapper[4825]: I0122 15:44:29.931619 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.424081 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.652015 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.654137 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.660556 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.660731 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qhjbr" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.660847 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.687245 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.719942 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.752495 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmlfj\" (UniqueName: \"kubernetes.io/projected/06529f6f-025e-4d9a-bf22-769bfb00d3da-kube-api-access-pmlfj\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.785727 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06529f6f-025e-4d9a-bf22-769bfb00d3da-config-data\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.786103 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/06529f6f-025e-4d9a-bf22-769bfb00d3da-logs\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.786296 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06529f6f-025e-4d9a-bf22-769bfb00d3da-scripts\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.786913 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06529f6f-025e-4d9a-bf22-769bfb00d3da-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.787323 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06529f6f-025e-4d9a-bf22-769bfb00d3da-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.787445 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.795970 4825 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.797575 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.802878 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.855730 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.890974 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-config\") pod \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.892052 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l982v\" (UniqueName: \"kubernetes.io/projected/ea787fac-b330-4b7b-b3ee-f9230933bc6d-kube-api-access-l982v\") pod \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.892923 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-ovsdbserver-nb\") pod \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.893136 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-ovsdbserver-sb\") pod \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " Jan 22 
15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.893408 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-dns-swift-storage-0\") pod \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.893744 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-dns-svc\") pod \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\" (UID: \"ea787fac-b330-4b7b-b3ee-f9230933bc6d\") " Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.895495 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06529f6f-025e-4d9a-bf22-769bfb00d3da-scripts\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.895557 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06529f6f-025e-4d9a-bf22-769bfb00d3da-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.895613 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.895656 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.895728 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.895809 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-logs\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.895848 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06529f6f-025e-4d9a-bf22-769bfb00d3da-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.895892 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc 
kubenswrapper[4825]: I0122 15:44:30.895947 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.896004 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.896031 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxrrt\" (UniqueName: \"kubernetes.io/projected/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-kube-api-access-jxrrt\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.896054 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmlfj\" (UniqueName: \"kubernetes.io/projected/06529f6f-025e-4d9a-bf22-769bfb00d3da-kube-api-access-pmlfj\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.896087 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06529f6f-025e-4d9a-bf22-769bfb00d3da-config-data\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 
15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.896113 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06529f6f-025e-4d9a-bf22-769bfb00d3da-logs\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.892083 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-config" (OuterVolumeSpecName: "config") pod "ea787fac-b330-4b7b-b3ee-f9230933bc6d" (UID: "ea787fac-b330-4b7b-b3ee-f9230933bc6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.894846 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ea787fac-b330-4b7b-b3ee-f9230933bc6d" (UID: "ea787fac-b330-4b7b-b3ee-f9230933bc6d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.896522 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ea787fac-b330-4b7b-b3ee-f9230933bc6d" (UID: "ea787fac-b330-4b7b-b3ee-f9230933bc6d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.896771 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ea787fac-b330-4b7b-b3ee-f9230933bc6d" (UID: "ea787fac-b330-4b7b-b3ee-f9230933bc6d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.897762 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ea787fac-b330-4b7b-b3ee-f9230933bc6d" (UID: "ea787fac-b330-4b7b-b3ee-f9230933bc6d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.899653 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06529f6f-025e-4d9a-bf22-769bfb00d3da-logs\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.903191 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06529f6f-025e-4d9a-bf22-769bfb00d3da-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.905339 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea787fac-b330-4b7b-b3ee-f9230933bc6d-kube-api-access-l982v" (OuterVolumeSpecName: "kube-api-access-l982v") pod "ea787fac-b330-4b7b-b3ee-f9230933bc6d" (UID: 
"ea787fac-b330-4b7b-b3ee-f9230933bc6d"). InnerVolumeSpecName "kube-api-access-l982v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.908235 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.908281 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2eba76fcf8acb10fcd1d5de55fcc46feaa499f2cf7d93b353c025f405bcc2f19/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.936056 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06529f6f-025e-4d9a-bf22-769bfb00d3da-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.936175 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06529f6f-025e-4d9a-bf22-769bfb00d3da-scripts\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.937322 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06529f6f-025e-4d9a-bf22-769bfb00d3da-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.941694 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmlfj\" (UniqueName: \"kubernetes.io/projected/06529f6f-025e-4d9a-bf22-769bfb00d3da-kube-api-access-pmlfj\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:30 crc kubenswrapper[4825]: I0122 15:44:30.982298 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\") pod \"glance-default-external-api-0\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.006285 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-logs\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.006432 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.006501 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " 
pod="openstack/glance-default-internal-api-0" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.006531 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxrrt\" (UniqueName: \"kubernetes.io/projected/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-kube-api-access-jxrrt\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.006689 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.006730 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.006787 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.006891 4825 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.006908 4825 
reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.006917 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.006925 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l982v\" (UniqueName: \"kubernetes.io/projected/ea787fac-b330-4b7b-b3ee-f9230933bc6d-kube-api-access-l982v\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.006935 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.006945 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea787fac-b330-4b7b-b3ee-f9230933bc6d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.007766 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-logs\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.008481 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 
15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.154717 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.162803 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.163713 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.167692 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.167786 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7722be3f00b9a9940eea4c247f06e25a83500c17d2f465a46607559e6e786615/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.173636 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.175126 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxrrt\" (UniqueName: \"kubernetes.io/projected/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-kube-api-access-jxrrt\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.211759 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\") pod \"glance-default-internal-api-0\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.431046 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79cd4f6685-5x4lj" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.481055 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.529858 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-5x4lj"] Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.529893 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-5x4lj"] Jan 22 15:44:31 crc kubenswrapper[4825]: I0122 15:44:31.881738 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 22 15:44:33 crc kubenswrapper[4825]: I0122 15:44:33.535942 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea787fac-b330-4b7b-b3ee-f9230933bc6d" path="/var/lib/kubelet/pods/ea787fac-b330-4b7b-b3ee-f9230933bc6d/volumes" Jan 22 15:44:34 crc kubenswrapper[4825]: I0122 15:44:34.203861 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 15:44:34 crc kubenswrapper[4825]: I0122 15:44:34.356838 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.684147 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-54b58d9b7-q6gvq"] Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.686004 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.689164 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.689610 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.696922 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54b58d9b7-q6gvq"] Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.872486 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-public-tls-certs\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.872548 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bsnh\" (UniqueName: \"kubernetes.io/projected/db48ac29-2967-41a0-9512-9317757070a9-kube-api-access-7bsnh\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.872710 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-combined-ca-bundle\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.872919 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-config\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.873050 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-httpd-config\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.873145 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-internal-tls-certs\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.873269 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-ovndb-tls-certs\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.976263 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-ovndb-tls-certs\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.976378 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-public-tls-certs\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.976426 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bsnh\" (UniqueName: \"kubernetes.io/projected/db48ac29-2967-41a0-9512-9317757070a9-kube-api-access-7bsnh\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.976526 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-combined-ca-bundle\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.976794 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-config\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.977301 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-httpd-config\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.977382 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-internal-tls-certs\") pod 
\"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.983191 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-combined-ca-bundle\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.983538 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-httpd-config\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.985321 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-public-tls-certs\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.985789 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-internal-tls-certs\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:35 crc kubenswrapper[4825]: I0122 15:44:35.986583 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-ovndb-tls-certs\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:36 crc 
kubenswrapper[4825]: I0122 15:44:35.997348 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-config\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:36 crc kubenswrapper[4825]: I0122 15:44:35.999147 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bsnh\" (UniqueName: \"kubernetes.io/projected/db48ac29-2967-41a0-9512-9317757070a9-kube-api-access-7bsnh\") pod \"neutron-54b58d9b7-q6gvq\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:36 crc kubenswrapper[4825]: I0122 15:44:36.031785 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:36 crc kubenswrapper[4825]: I0122 15:44:36.877150 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 22 15:44:36 crc kubenswrapper[4825]: I0122 15:44:36.884173 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 22 15:44:37 crc kubenswrapper[4825]: I0122 15:44:37.064674 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 22 15:44:38 crc kubenswrapper[4825]: W0122 15:44:38.249288 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3ca6d27_7a63_4ba6_8baf_8180c79cd810.slice/crio-0b70ff5c320f4d247804f147af28239c72702199beeb72a5a2f06dfb71861a15 WatchSource:0}: Error finding container 0b70ff5c320f4d247804f147af28239c72702199beeb72a5a2f06dfb71861a15: Status 404 returned error can't find the container with id 0b70ff5c320f4d247804f147af28239c72702199beeb72a5a2f06dfb71861a15 Jan 22 15:44:38 
crc kubenswrapper[4825]: I0122 15:44:38.884396 4825 scope.go:117] "RemoveContainer" containerID="b9f48ef41b6562a93718b9d93f3b48d21018d719d3dbb0f2cedb5ad10af69c4e" Jan 22 15:44:38 crc kubenswrapper[4825]: E0122 15:44:38.894578 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Jan 22 15:44:38 crc kubenswrapper[4825]: E0122 15:44:38.894623 4825 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Jan 22 15:44:38 crc kubenswrapper[4825]: E0122 15:44:38.894768 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountProp
agation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mst8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-cshtw_openstack(52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:44:38 crc kubenswrapper[4825]: E0122 15:44:38.896201 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cloudkitty-db-sync-cshtw" podUID="52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e" Jan 22 15:44:39 crc kubenswrapper[4825]: I0122 15:44:39.092686 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" 
event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerStarted","Data":"4f117d8aef866860d54f3d492ab55e9d654f82ddf841344db75dba9d26403f13"} Jan 22 15:44:39 crc kubenswrapper[4825]: I0122 15:44:39.094552 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kwfvj" event={"ID":"b3ca6d27-7a63-4ba6-8baf-8180c79cd810","Type":"ContainerStarted","Data":"0b70ff5c320f4d247804f147af28239c72702199beeb72a5a2f06dfb71861a15"} Jan 22 15:44:39 crc kubenswrapper[4825]: I0122 15:44:39.096838 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-55djf" event={"ID":"0899ccaa-6936-4c34-92d3-e579cb0f0bea","Type":"ContainerStarted","Data":"18833d81bb45127f7bcffa60dbb584e71d5b7dbff2e994602133dd8d7a4a9814"} Jan 22 15:44:39 crc kubenswrapper[4825]: E0122 15:44:39.100302 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-cshtw" podUID="52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e" Jan 22 15:44:39 crc kubenswrapper[4825]: I0122 15:44:39.502050 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 15:44:39 crc kubenswrapper[4825]: I0122 15:44:39.841610 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 15:44:39 crc kubenswrapper[4825]: I0122 15:44:39.913270 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-gm5qw"] Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.004360 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6845d75bcd-cxzv6"] Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.118940 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kwfvj" 
event={"ID":"b3ca6d27-7a63-4ba6-8baf-8180c79cd810","Type":"ContainerStarted","Data":"da8359e16e8b3afdd697e650baf9a6e11b55dbb0ad4162ee3c3de07649000b41"} Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.138481 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e224d84c-5d76-451a-9c81-bdf42336c375","Type":"ContainerStarted","Data":"9282688fd7dddaf6955498af53a185ae2ce898ba0a13961f469ce5cdd5966769"} Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.140762 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-kwfvj" podStartSLOduration=29.140745754 podStartE2EDuration="29.140745754s" podCreationTimestamp="2026-01-22 15:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:44:40.140458165 +0000 UTC m=+1226.901985085" watchObservedRunningTime="2026-01-22 15:44:40.140745754 +0000 UTC m=+1226.902272664" Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.149216 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" event={"ID":"3440c5ee-21a0-480d-8960-0d60146517cb","Type":"ContainerStarted","Data":"c3030df792bd39e249a7a1c96f9e0e33b969f76523637226abfe40bdafc3765d"} Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.151197 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514","Type":"ContainerStarted","Data":"841a681270859e6d2750e521092ce8e4d06e0161ac5f04779479ffea607c9956"} Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.157555 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6fxh4" event={"ID":"589c7924-baff-443f-b923-59a1348c709a","Type":"ContainerStarted","Data":"1ce120a81de5582340734a40599a7e8e65c947a0c732dfa76d07436a8232dab5"} Jan 22 15:44:40 crc kubenswrapper[4825]: 
I0122 15:44:40.161996 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"06529f6f-025e-4d9a-bf22-769bfb00d3da","Type":"ContainerStarted","Data":"af39a34403f6216d047a81a2e709675635b82746c1d21fa7066bb0d43709c2e2"} Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.163849 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6845d75bcd-cxzv6" event={"ID":"eacf9923-7898-4237-a615-e2c8de47d3cb","Type":"ContainerStarted","Data":"fdec62d8abdbe6392684a2bc56b0bcfefb5c50374f06ccf1ad9ccfc9cc33ef8c"} Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.165683 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t8c8z" event={"ID":"d2480086-9709-4e61-af71-042055623d32","Type":"ContainerStarted","Data":"0b7199d4207aec152ef2e116aa6cbce9b1ea29c3a44c091ea6c89c8c47195b96"} Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.177563 4825 generic.go:334] "Generic (PLEG): container finished" podID="0899ccaa-6936-4c34-92d3-e579cb0f0bea" containerID="294ecac102f889dfdc2395f3338ed519e09b83b1bf0197e215bea7d9d1c53f5c" exitCode=0 Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.177734 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-55djf" event={"ID":"0899ccaa-6936-4c34-92d3-e579cb0f0bea","Type":"ContainerDied","Data":"294ecac102f889dfdc2395f3338ed519e09b83b1bf0197e215bea7d9d1c53f5c"} Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.189123 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54b58d9b7-q6gvq"] Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.191189 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-6fxh4" podStartSLOduration=14.08018227 podStartE2EDuration="42.191166481s" podCreationTimestamp="2026-01-22 15:43:58 +0000 UTC" firstStartedPulling="2026-01-22 15:44:00.749785387 +0000 UTC m=+1187.511312297" 
lastFinishedPulling="2026-01-22 15:44:28.860769598 +0000 UTC m=+1215.622296508" observedRunningTime="2026-01-22 15:44:40.175642525 +0000 UTC m=+1226.937169435" watchObservedRunningTime="2026-01-22 15:44:40.191166481 +0000 UTC m=+1226.952693391" Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.221105 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-t8c8z" podStartSLOduration=3.698009093 podStartE2EDuration="42.22108801s" podCreationTimestamp="2026-01-22 15:43:58 +0000 UTC" firstStartedPulling="2026-01-22 15:44:00.819990172 +0000 UTC m=+1187.581517082" lastFinishedPulling="2026-01-22 15:44:39.343069089 +0000 UTC m=+1226.104595999" observedRunningTime="2026-01-22 15:44:40.19428086 +0000 UTC m=+1226.955807770" watchObservedRunningTime="2026-01-22 15:44:40.22108801 +0000 UTC m=+1226.982614910" Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.688997 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.797493 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-ovsdbserver-nb\") pod \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.797840 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf52c\" (UniqueName: \"kubernetes.io/projected/0899ccaa-6936-4c34-92d3-e579cb0f0bea-kube-api-access-hf52c\") pod \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.797864 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-dns-swift-storage-0\") pod \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.797932 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-config\") pod \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.797962 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-ovsdbserver-sb\") pod \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.798035 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-dns-svc\") pod \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.812498 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0899ccaa-6936-4c34-92d3-e579cb0f0bea-kube-api-access-hf52c" (OuterVolumeSpecName: "kube-api-access-hf52c") pod "0899ccaa-6936-4c34-92d3-e579cb0f0bea" (UID: "0899ccaa-6936-4c34-92d3-e579cb0f0bea"). InnerVolumeSpecName "kube-api-access-hf52c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.897407 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0899ccaa-6936-4c34-92d3-e579cb0f0bea" (UID: "0899ccaa-6936-4c34-92d3-e579cb0f0bea"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.899764 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.899787 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf52c\" (UniqueName: \"kubernetes.io/projected/0899ccaa-6936-4c34-92d3-e579cb0f0bea-kube-api-access-hf52c\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.913136 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-config" (OuterVolumeSpecName: "config") pod "0899ccaa-6936-4c34-92d3-e579cb0f0bea" (UID: "0899ccaa-6936-4c34-92d3-e579cb0f0bea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.938797 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0899ccaa-6936-4c34-92d3-e579cb0f0bea" (UID: "0899ccaa-6936-4c34-92d3-e579cb0f0bea"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:44:40 crc kubenswrapper[4825]: E0122 15:44:40.943814 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-dns-svc podName:0899ccaa-6936-4c34-92d3-e579cb0f0bea nodeName:}" failed. No retries permitted until 2026-01-22 15:44:41.443778912 +0000 UTC m=+1228.205305822 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "dns-svc" (UniqueName: "kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-dns-svc") pod "0899ccaa-6936-4c34-92d3-e579cb0f0bea" (UID: "0899ccaa-6936-4c34-92d3-e579cb0f0bea") : error deleting /var/lib/kubelet/pods/0899ccaa-6936-4c34-92d3-e579cb0f0bea/volume-subpaths: remove /var/lib/kubelet/pods/0899ccaa-6936-4c34-92d3-e579cb0f0bea/volume-subpaths: no such file or directory Jan 22 15:44:40 crc kubenswrapper[4825]: I0122 15:44:40.944109 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0899ccaa-6936-4c34-92d3-e579cb0f0bea" (UID: "0899ccaa-6936-4c34-92d3-e579cb0f0bea"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.002444 4825 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.002654 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.002667 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.211307 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-55djf" event={"ID":"0899ccaa-6936-4c34-92d3-e579cb0f0bea","Type":"ContainerDied","Data":"18833d81bb45127f7bcffa60dbb584e71d5b7dbff2e994602133dd8d7a4a9814"} Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.211555 4825 scope.go:117] "RemoveContainer" containerID="294ecac102f889dfdc2395f3338ed519e09b83b1bf0197e215bea7d9d1c53f5c" Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.211561 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.223494 4825 generic.go:334] "Generic (PLEG): container finished" podID="3440c5ee-21a0-480d-8960-0d60146517cb" containerID="89fd1799fd20aa6e1d9fce05981c8848657cb4b9ab66e65284e507d9b019ab82" exitCode=0 Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.223565 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" event={"ID":"3440c5ee-21a0-480d-8960-0d60146517cb","Type":"ContainerDied","Data":"89fd1799fd20aa6e1d9fce05981c8848657cb4b9ab66e65284e507d9b019ab82"} Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.236040 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514","Type":"ContainerStarted","Data":"e7d8326f0f4831a3cd358f16c087a3ab29ac903bd6647db34a8e2d8e2809fedc"} Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.240005 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54b58d9b7-q6gvq" event={"ID":"db48ac29-2967-41a0-9512-9317757070a9","Type":"ContainerStarted","Data":"7a439a656d03eea0e003c2fffeeb76bcdcabdadc037f986b8c6752f41bae7a0a"} Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.240057 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54b58d9b7-q6gvq" event={"ID":"db48ac29-2967-41a0-9512-9317757070a9","Type":"ContainerStarted","Data":"d156b65a89555f98bf8601f74b9f589e67a4c95a8ad5aa398fa60515ac17944e"} Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.240165 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.272012 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"06529f6f-025e-4d9a-bf22-769bfb00d3da","Type":"ContainerStarted","Data":"abc365e319308244b3b23ab2936d4329731048a822dcf6a5524adfd4bad5daad"} Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.286691 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-54b58d9b7-q6gvq" podStartSLOduration=6.286670713 podStartE2EDuration="6.286670713s" podCreationTimestamp="2026-01-22 15:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:44:41.261556892 +0000 UTC m=+1228.023083802" watchObservedRunningTime="2026-01-22 15:44:41.286670713 +0000 UTC m=+1228.048197623" Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.288461 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6845d75bcd-cxzv6" event={"ID":"eacf9923-7898-4237-a615-e2c8de47d3cb","Type":"ContainerStarted","Data":"da875616181706ec7e9ab0aed3b861b38d1642bddba76527b4b01b520f7f8448"} Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.288613 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6845d75bcd-cxzv6" event={"ID":"eacf9923-7898-4237-a615-e2c8de47d3cb","Type":"ContainerStarted","Data":"89747f21a9dcbcda733ed4794c1a788daa54e6e4dea6db06fed567992fcc5d69"} Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.288636 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.324283 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6845d75bcd-cxzv6" podStartSLOduration=12.324263602 podStartE2EDuration="12.324263602s" podCreationTimestamp="2026-01-22 15:44:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:44:41.312522755 +0000 UTC m=+1228.074049665" 
watchObservedRunningTime="2026-01-22 15:44:41.324263602 +0000 UTC m=+1228.085790512" Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.445050 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-dns-svc\") pod \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\" (UID: \"0899ccaa-6936-4c34-92d3-e579cb0f0bea\") " Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.445657 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0899ccaa-6936-4c34-92d3-e579cb0f0bea" (UID: "0899ccaa-6936-4c34-92d3-e579cb0f0bea"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:44:41 crc kubenswrapper[4825]: I0122 15:44:41.548495 4825 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0899ccaa-6936-4c34-92d3-e579cb0f0bea-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:42 crc kubenswrapper[4825]: I0122 15:44:42.298929 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54b58d9b7-q6gvq" event={"ID":"db48ac29-2967-41a0-9512-9317757070a9","Type":"ContainerStarted","Data":"84422cb4707968ad06ffc4146cc9027256b643f7fd3fc6324392e02e1bcdf2e0"} Jan 22 15:44:42 crc kubenswrapper[4825]: I0122 15:44:42.312708 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"06529f6f-025e-4d9a-bf22-769bfb00d3da","Type":"ContainerStarted","Data":"f377d7715091a1627ecd69fb9277a8ebb5fd3f7da1d0c5bfe48459190d397f6b"} Jan 22 15:44:42 crc kubenswrapper[4825]: I0122 15:44:42.312810 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="06529f6f-025e-4d9a-bf22-769bfb00d3da" containerName="glance-log" 
containerID="cri-o://abc365e319308244b3b23ab2936d4329731048a822dcf6a5524adfd4bad5daad" gracePeriod=30 Jan 22 15:44:42 crc kubenswrapper[4825]: I0122 15:44:42.312962 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="06529f6f-025e-4d9a-bf22-769bfb00d3da" containerName="glance-httpd" containerID="cri-o://f377d7715091a1627ecd69fb9277a8ebb5fd3f7da1d0c5bfe48459190d397f6b" gracePeriod=30 Jan 22 15:44:42 crc kubenswrapper[4825]: I0122 15:44:42.318918 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" event={"ID":"3440c5ee-21a0-480d-8960-0d60146517cb","Type":"ContainerStarted","Data":"cdca01747763a82978a4dbc536dafac3fa8272a1028f966c730a29e691e51523"} Jan 22 15:44:42 crc kubenswrapper[4825]: I0122 15:44:42.318996 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" Jan 22 15:44:42 crc kubenswrapper[4825]: I0122 15:44:42.330795 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514","Type":"ContainerStarted","Data":"7378987c177af45b6fce690c908b371774248cc6b3ba7ad1b8861056fdd2699b"} Jan 22 15:44:42 crc kubenswrapper[4825]: I0122 15:44:42.330864 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d4905b0a-ebe4-43d1-9787-b8fdcd2ae514" containerName="glance-log" containerID="cri-o://e7d8326f0f4831a3cd358f16c087a3ab29ac903bd6647db34a8e2d8e2809fedc" gracePeriod=30 Jan 22 15:44:42 crc kubenswrapper[4825]: I0122 15:44:42.330912 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d4905b0a-ebe4-43d1-9787-b8fdcd2ae514" containerName="glance-httpd" containerID="cri-o://7378987c177af45b6fce690c908b371774248cc6b3ba7ad1b8861056fdd2699b" gracePeriod=30 Jan 22 
15:44:42 crc kubenswrapper[4825]: I0122 15:44:42.349453 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=13.349426966 podStartE2EDuration="13.349426966s" podCreationTimestamp="2026-01-22 15:44:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:44:42.340803988 +0000 UTC m=+1229.102330908" watchObservedRunningTime="2026-01-22 15:44:42.349426966 +0000 UTC m=+1229.110953886" Jan 22 15:44:42 crc kubenswrapper[4825]: I0122 15:44:42.373877 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=13.373855466 podStartE2EDuration="13.373855466s" podCreationTimestamp="2026-01-22 15:44:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:44:42.370321215 +0000 UTC m=+1229.131848135" watchObservedRunningTime="2026-01-22 15:44:42.373855466 +0000 UTC m=+1229.135382376" Jan 22 15:44:42 crc kubenswrapper[4825]: I0122 15:44:42.407671 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" podStartSLOduration=13.407644196 podStartE2EDuration="13.407644196s" podCreationTimestamp="2026-01-22 15:44:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:44:42.397897566 +0000 UTC m=+1229.159424486" watchObservedRunningTime="2026-01-22 15:44:42.407644196 +0000 UTC m=+1229.169171106" Jan 22 15:44:43 crc kubenswrapper[4825]: I0122 15:44:43.364616 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-54vjv" 
event={"ID":"7211decb-e02d-47e6-9ea7-493e8e6a3743","Type":"ContainerStarted","Data":"2e6b7f886ffb9b325b3310026c9c248243d9843bf701112f7ebfb5473836de6e"} Jan 22 15:44:43 crc kubenswrapper[4825]: I0122 15:44:43.370430 4825 generic.go:334] "Generic (PLEG): container finished" podID="06529f6f-025e-4d9a-bf22-769bfb00d3da" containerID="abc365e319308244b3b23ab2936d4329731048a822dcf6a5524adfd4bad5daad" exitCode=143 Jan 22 15:44:43 crc kubenswrapper[4825]: I0122 15:44:43.370596 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"06529f6f-025e-4d9a-bf22-769bfb00d3da","Type":"ContainerDied","Data":"abc365e319308244b3b23ab2936d4329731048a822dcf6a5524adfd4bad5daad"} Jan 22 15:44:43 crc kubenswrapper[4825]: I0122 15:44:43.373094 4825 generic.go:334] "Generic (PLEG): container finished" podID="d4905b0a-ebe4-43d1-9787-b8fdcd2ae514" containerID="e7d8326f0f4831a3cd358f16c087a3ab29ac903bd6647db34a8e2d8e2809fedc" exitCode=143 Jan 22 15:44:43 crc kubenswrapper[4825]: I0122 15:44:43.373228 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514","Type":"ContainerDied","Data":"e7d8326f0f4831a3cd358f16c087a3ab29ac903bd6647db34a8e2d8e2809fedc"} Jan 22 15:44:43 crc kubenswrapper[4825]: I0122 15:44:43.392582 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-54vjv" podStartSLOduration=4.859013004 podStartE2EDuration="45.392563444s" podCreationTimestamp="2026-01-22 15:43:58 +0000 UTC" firstStartedPulling="2026-01-22 15:44:00.766353243 +0000 UTC m=+1187.527880163" lastFinishedPulling="2026-01-22 15:44:41.299903693 +0000 UTC m=+1228.061430603" observedRunningTime="2026-01-22 15:44:43.390196666 +0000 UTC m=+1230.151723576" watchObservedRunningTime="2026-01-22 15:44:43.392563444 +0000 UTC m=+1230.154090354" Jan 22 15:44:44 crc kubenswrapper[4825]: I0122 15:44:44.388583 4825 generic.go:334] 
"Generic (PLEG): container finished" podID="06529f6f-025e-4d9a-bf22-769bfb00d3da" containerID="f377d7715091a1627ecd69fb9277a8ebb5fd3f7da1d0c5bfe48459190d397f6b" exitCode=0 Jan 22 15:44:44 crc kubenswrapper[4825]: I0122 15:44:44.389189 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"06529f6f-025e-4d9a-bf22-769bfb00d3da","Type":"ContainerDied","Data":"f377d7715091a1627ecd69fb9277a8ebb5fd3f7da1d0c5bfe48459190d397f6b"} Jan 22 15:44:44 crc kubenswrapper[4825]: I0122 15:44:44.399211 4825 generic.go:334] "Generic (PLEG): container finished" podID="d4905b0a-ebe4-43d1-9787-b8fdcd2ae514" containerID="7378987c177af45b6fce690c908b371774248cc6b3ba7ad1b8861056fdd2699b" exitCode=0 Jan 22 15:44:44 crc kubenswrapper[4825]: I0122 15:44:44.399257 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514","Type":"ContainerDied","Data":"7378987c177af45b6fce690c908b371774248cc6b3ba7ad1b8861056fdd2699b"} Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.114883 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.200659 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-httpd-run\") pod \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.201305 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\") pod \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.201325 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d4905b0a-ebe4-43d1-9787-b8fdcd2ae514" (UID: "d4905b0a-ebe4-43d1-9787-b8fdcd2ae514"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.201361 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxrrt\" (UniqueName: \"kubernetes.io/projected/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-kube-api-access-jxrrt\") pod \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.201509 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-scripts\") pod \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.201618 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-logs\") pod \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.201650 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-combined-ca-bundle\") pod \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.201747 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-config-data\") pod \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\" (UID: \"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514\") " Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.202952 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-logs" (OuterVolumeSpecName: "logs") pod "d4905b0a-ebe4-43d1-9787-b8fdcd2ae514" (UID: "d4905b0a-ebe4-43d1-9787-b8fdcd2ae514"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.202947 4825 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.207146 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-kube-api-access-jxrrt" (OuterVolumeSpecName: "kube-api-access-jxrrt") pod "d4905b0a-ebe4-43d1-9787-b8fdcd2ae514" (UID: "d4905b0a-ebe4-43d1-9787-b8fdcd2ae514"). InnerVolumeSpecName "kube-api-access-jxrrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.221128 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-scripts" (OuterVolumeSpecName: "scripts") pod "d4905b0a-ebe4-43d1-9787-b8fdcd2ae514" (UID: "d4905b0a-ebe4-43d1-9787-b8fdcd2ae514"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.254381 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e" (OuterVolumeSpecName: "glance") pod "d4905b0a-ebe4-43d1-9787-b8fdcd2ae514" (UID: "d4905b0a-ebe4-43d1-9787-b8fdcd2ae514"). InnerVolumeSpecName "pvc-4e49a725-3f57-44a5-bfa8-df35534f326e". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.288095 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4905b0a-ebe4-43d1-9787-b8fdcd2ae514" (UID: "d4905b0a-ebe4-43d1-9787-b8fdcd2ae514"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.304938 4825 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\") on node \"crc\" " Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.305004 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxrrt\" (UniqueName: \"kubernetes.io/projected/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-kube-api-access-jxrrt\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.305267 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.305285 4825 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-logs\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.305301 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.321759 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-config-data" (OuterVolumeSpecName: "config-data") pod "d4905b0a-ebe4-43d1-9787-b8fdcd2ae514" (UID: "d4905b0a-ebe4-43d1-9787-b8fdcd2ae514"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.339820 4825 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.340201 4825 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4e49a725-3f57-44a5-bfa8-df35534f326e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e") on node "crc" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.369450 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.407685 4825 reconciler_common.go:293] "Volume detached for volume \"pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.408062 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.448514 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4905b0a-ebe4-43d1-9787-b8fdcd2ae514","Type":"ContainerDied","Data":"841a681270859e6d2750e521092ce8e4d06e0161ac5f04779479ffea607c9956"} Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.448580 4825 scope.go:117] "RemoveContainer" 
containerID="7378987c177af45b6fce690c908b371774248cc6b3ba7ad1b8861056fdd2699b" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.448533 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.453608 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"06529f6f-025e-4d9a-bf22-769bfb00d3da","Type":"ContainerDied","Data":"af39a34403f6216d047a81a2e709675635b82746c1d21fa7066bb0d43709c2e2"} Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.453637 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.456264 4825 generic.go:334] "Generic (PLEG): container finished" podID="d2480086-9709-4e61-af71-042055623d32" containerID="0b7199d4207aec152ef2e116aa6cbce9b1ea29c3a44c091ea6c89c8c47195b96" exitCode=0 Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.456319 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t8c8z" event={"ID":"d2480086-9709-4e61-af71-042055623d32","Type":"ContainerDied","Data":"0b7199d4207aec152ef2e116aa6cbce9b1ea29c3a44c091ea6c89c8c47195b96"} Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.460234 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e224d84c-5d76-451a-9c81-bdf42336c375","Type":"ContainerStarted","Data":"02cf9d55053bcb0b9d46e4b373432f76568863dc59f7ad047d7ebf7697fbb579"} Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.495917 4825 scope.go:117] "RemoveContainer" containerID="e7d8326f0f4831a3cd358f16c087a3ab29ac903bd6647db34a8e2d8e2809fedc" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.509588 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/06529f6f-025e-4d9a-bf22-769bfb00d3da-logs\") pod \"06529f6f-025e-4d9a-bf22-769bfb00d3da\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.509904 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06529f6f-025e-4d9a-bf22-769bfb00d3da-config-data\") pod \"06529f6f-025e-4d9a-bf22-769bfb00d3da\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.509957 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmlfj\" (UniqueName: \"kubernetes.io/projected/06529f6f-025e-4d9a-bf22-769bfb00d3da-kube-api-access-pmlfj\") pod \"06529f6f-025e-4d9a-bf22-769bfb00d3da\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.510058 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06529f6f-025e-4d9a-bf22-769bfb00d3da-scripts\") pod \"06529f6f-025e-4d9a-bf22-769bfb00d3da\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.510205 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\") pod \"06529f6f-025e-4d9a-bf22-769bfb00d3da\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.510307 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06529f6f-025e-4d9a-bf22-769bfb00d3da-httpd-run\") pod \"06529f6f-025e-4d9a-bf22-769bfb00d3da\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.510384 4825 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06529f6f-025e-4d9a-bf22-769bfb00d3da-combined-ca-bundle\") pod \"06529f6f-025e-4d9a-bf22-769bfb00d3da\" (UID: \"06529f6f-025e-4d9a-bf22-769bfb00d3da\") " Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.513718 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06529f6f-025e-4d9a-bf22-769bfb00d3da-logs" (OuterVolumeSpecName: "logs") pod "06529f6f-025e-4d9a-bf22-769bfb00d3da" (UID: "06529f6f-025e-4d9a-bf22-769bfb00d3da"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.514182 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06529f6f-025e-4d9a-bf22-769bfb00d3da-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "06529f6f-025e-4d9a-bf22-769bfb00d3da" (UID: "06529f6f-025e-4d9a-bf22-769bfb00d3da"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.520689 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06529f6f-025e-4d9a-bf22-769bfb00d3da-scripts" (OuterVolumeSpecName: "scripts") pod "06529f6f-025e-4d9a-bf22-769bfb00d3da" (UID: "06529f6f-025e-4d9a-bf22-769bfb00d3da"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.530864 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.533793 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06529f6f-025e-4d9a-bf22-769bfb00d3da-kube-api-access-pmlfj" (OuterVolumeSpecName: "kube-api-access-pmlfj") pod "06529f6f-025e-4d9a-bf22-769bfb00d3da" (UID: "06529f6f-025e-4d9a-bf22-769bfb00d3da"). InnerVolumeSpecName "kube-api-access-pmlfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.543880 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.544081 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95" (OuterVolumeSpecName: "glance") pod "06529f6f-025e-4d9a-bf22-769bfb00d3da" (UID: "06529f6f-025e-4d9a-bf22-769bfb00d3da"). InnerVolumeSpecName "pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.554962 4825 scope.go:117] "RemoveContainer" containerID="f377d7715091a1627ecd69fb9277a8ebb5fd3f7da1d0c5bfe48459190d397f6b" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.559422 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 15:44:48 crc kubenswrapper[4825]: E0122 15:44:48.559995 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4905b0a-ebe4-43d1-9787-b8fdcd2ae514" containerName="glance-log" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.560013 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4905b0a-ebe4-43d1-9787-b8fdcd2ae514" containerName="glance-log" Jan 22 15:44:48 crc kubenswrapper[4825]: E0122 15:44:48.560028 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06529f6f-025e-4d9a-bf22-769bfb00d3da" containerName="glance-httpd" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.560034 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="06529f6f-025e-4d9a-bf22-769bfb00d3da" containerName="glance-httpd" Jan 22 15:44:48 crc kubenswrapper[4825]: E0122 15:44:48.560058 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06529f6f-025e-4d9a-bf22-769bfb00d3da" containerName="glance-log" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.560064 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="06529f6f-025e-4d9a-bf22-769bfb00d3da" containerName="glance-log" Jan 22 15:44:48 crc kubenswrapper[4825]: E0122 15:44:48.560079 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4905b0a-ebe4-43d1-9787-b8fdcd2ae514" containerName="glance-httpd" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.560085 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4905b0a-ebe4-43d1-9787-b8fdcd2ae514" containerName="glance-httpd" Jan 22 15:44:48 crc kubenswrapper[4825]: E0122 
15:44:48.560099 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0899ccaa-6936-4c34-92d3-e579cb0f0bea" containerName="init" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.560105 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0899ccaa-6936-4c34-92d3-e579cb0f0bea" containerName="init" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.560285 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4905b0a-ebe4-43d1-9787-b8fdcd2ae514" containerName="glance-log" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.560296 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="06529f6f-025e-4d9a-bf22-769bfb00d3da" containerName="glance-log" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.560314 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4905b0a-ebe4-43d1-9787-b8fdcd2ae514" containerName="glance-httpd" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.560322 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="0899ccaa-6936-4c34-92d3-e579cb0f0bea" containerName="init" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.560331 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="06529f6f-025e-4d9a-bf22-769bfb00d3da" containerName="glance-httpd" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.561588 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.563621 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06529f6f-025e-4d9a-bf22-769bfb00d3da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06529f6f-025e-4d9a-bf22-769bfb00d3da" (UID: "06529f6f-025e-4d9a-bf22-769bfb00d3da"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.565428 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.566480 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.566665 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.630553 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63da9616-1db2-49a5-8591-9c8bdbbb43a3-logs\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.630621 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.630776 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.630831 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/63da9616-1db2-49a5-8591-9c8bdbbb43a3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.630862 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.630884 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.630960 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qnjb\" (UniqueName: \"kubernetes.io/projected/63da9616-1db2-49a5-8591-9c8bdbbb43a3-kube-api-access-6qnjb\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.631086 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.631161 4825 
reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06529f6f-025e-4d9a-bf22-769bfb00d3da-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.631173 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06529f6f-025e-4d9a-bf22-769bfb00d3da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.631183 4825 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06529f6f-025e-4d9a-bf22-769bfb00d3da-logs\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.631192 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmlfj\" (UniqueName: \"kubernetes.io/projected/06529f6f-025e-4d9a-bf22-769bfb00d3da-kube-api-access-pmlfj\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.631202 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06529f6f-025e-4d9a-bf22-769bfb00d3da-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.631224 4825 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\") on node \"crc\" " Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.643616 4825 scope.go:117] "RemoveContainer" containerID="abc365e319308244b3b23ab2936d4329731048a822dcf6a5524adfd4bad5daad" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.655163 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06529f6f-025e-4d9a-bf22-769bfb00d3da-config-data" (OuterVolumeSpecName: "config-data") pod 
"06529f6f-025e-4d9a-bf22-769bfb00d3da" (UID: "06529f6f-025e-4d9a-bf22-769bfb00d3da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.663515 4825 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.663648 4825 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95") on node "crc" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.733254 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.733349 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63da9616-1db2-49a5-8591-9c8bdbbb43a3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.733400 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.733470 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.733556 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qnjb\" (UniqueName: \"kubernetes.io/projected/63da9616-1db2-49a5-8591-9c8bdbbb43a3-kube-api-access-6qnjb\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.733594 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.733655 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63da9616-1db2-49a5-8591-9c8bdbbb43a3-logs\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.733689 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.733792 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/06529f6f-025e-4d9a-bf22-769bfb00d3da-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.733805 4825 reconciler_common.go:293] "Volume detached for volume \"pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.733877 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63da9616-1db2-49a5-8591-9c8bdbbb43a3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.734473 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63da9616-1db2-49a5-8591-9c8bdbbb43a3-logs\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.735881 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.735934 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7722be3f00b9a9940eea4c247f06e25a83500c17d2f465a46607559e6e786615/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.738499 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.740613 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.741732 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.744059 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.758366 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qnjb\" (UniqueName: \"kubernetes.io/projected/63da9616-1db2-49a5-8591-9c8bdbbb43a3-kube-api-access-6qnjb\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.782229 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\") pod \"glance-default-internal-api-0\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.787500 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.795904 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.811687 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.813401 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.817241 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.817945 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.830937 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.950963 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.954471 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-scripts\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.954689 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-config-data\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.955298 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0" Jan 22 
15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.955392 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.955434 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-logs\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.955478 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.955527 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmgwb\" (UniqueName: \"kubernetes.io/projected/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-kube-api-access-qmgwb\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:48 crc kubenswrapper[4825]: I0122 15:44:48.955638 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.058396 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.058797 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.058829 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-logs\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.059265 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-logs\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0" Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.060806 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " 
pod="openstack/glance-default-external-api-0"
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.060835 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmgwb\" (UniqueName: \"kubernetes.io/projected/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-kube-api-access-qmgwb\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.060926 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.061079 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.062890 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.062914 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2eba76fcf8acb10fcd1d5de55fcc46feaa499f2cf7d93b353c025f405bcc2f19/globalmount\"" pod="openstack/glance-default-external-api-0"
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.066500 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-scripts\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.066597 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-config-data\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.075532 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-scripts\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.075753 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.075873 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.078612 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmgwb\" (UniqueName: \"kubernetes.io/projected/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-kube-api-access-qmgwb\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.096709 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-config-data\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.138798 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\") pod \"glance-default-external-api-0\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.217151 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.472251 4825 generic.go:334] "Generic (PLEG): container finished" podID="b3ca6d27-7a63-4ba6-8baf-8180c79cd810" containerID="da8359e16e8b3afdd697e650baf9a6e11b55dbb0ad4162ee3c3de07649000b41" exitCode=0
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.472545 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kwfvj" event={"ID":"b3ca6d27-7a63-4ba6-8baf-8180c79cd810","Type":"ContainerDied","Data":"da8359e16e8b3afdd697e650baf9a6e11b55dbb0ad4162ee3c3de07649000b41"}
Jan 22 15:44:49 crc kubenswrapper[4825]: W0122 15:44:49.490643 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63da9616_1db2_49a5_8591_9c8bdbbb43a3.slice/crio-c7d077adda3a5a2948817335b239051015d20878806e0678989c395a0369bbaa WatchSource:0}: Error finding container c7d077adda3a5a2948817335b239051015d20878806e0678989c395a0369bbaa: Status 404 returned error can't find the container with id c7d077adda3a5a2948817335b239051015d20878806e0678989c395a0369bbaa
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.505091 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.531047 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06529f6f-025e-4d9a-bf22-769bfb00d3da" path="/var/lib/kubelet/pods/06529f6f-025e-4d9a-bf22-769bfb00d3da/volumes"
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.532260 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4905b0a-ebe4-43d1-9787-b8fdcd2ae514" path="/var/lib/kubelet/pods/d4905b0a-ebe4-43d1-9787-b8fdcd2ae514/volumes"
Jan 22 15:44:49 crc kubenswrapper[4825]: W0122 15:44:49.779535 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e86f1cc_a4dd_4f8f_b9b3_18806405875a.slice/crio-ce5e8a4589d1dccbbff53e439a2daadb44bd9b7499dab0d133cbd995fd47ef27 WatchSource:0}: Error finding container ce5e8a4589d1dccbbff53e439a2daadb44bd9b7499dab0d133cbd995fd47ef27: Status 404 returned error can't find the container with id ce5e8a4589d1dccbbff53e439a2daadb44bd9b7499dab0d133cbd995fd47ef27
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.783312 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 22 15:44:49 crc kubenswrapper[4825]: I0122 15:44:49.989915 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-gm5qw"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.076252 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bkbj9"]
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.076459 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" podUID="3d53d147-8362-48e9-b525-44249e49ae01" containerName="dnsmasq-dns" containerID="cri-o://dc4e83a0b3ba33d512d64c0bb483c49691ed9e4523e60de0ca68207752301676" gracePeriod=10
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.190104 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t8c8z"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.296690 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2480086-9709-4e61-af71-042055623d32-logs\") pod \"d2480086-9709-4e61-af71-042055623d32\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") "
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.298384 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bdgl\" (UniqueName: \"kubernetes.io/projected/d2480086-9709-4e61-af71-042055623d32-kube-api-access-7bdgl\") pod \"d2480086-9709-4e61-af71-042055623d32\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") "
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.297007 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2480086-9709-4e61-af71-042055623d32-logs" (OuterVolumeSpecName: "logs") pod "d2480086-9709-4e61-af71-042055623d32" (UID: "d2480086-9709-4e61-af71-042055623d32"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.298498 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2480086-9709-4e61-af71-042055623d32-scripts\") pod \"d2480086-9709-4e61-af71-042055623d32\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") "
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.298627 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2480086-9709-4e61-af71-042055623d32-config-data\") pod \"d2480086-9709-4e61-af71-042055623d32\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") "
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.298664 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2480086-9709-4e61-af71-042055623d32-combined-ca-bundle\") pod \"d2480086-9709-4e61-af71-042055623d32\" (UID: \"d2480086-9709-4e61-af71-042055623d32\") "
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.299483 4825 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2480086-9709-4e61-af71-042055623d32-logs\") on node \"crc\" DevicePath \"\""
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.306039 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2480086-9709-4e61-af71-042055623d32-scripts" (OuterVolumeSpecName: "scripts") pod "d2480086-9709-4e61-af71-042055623d32" (UID: "d2480086-9709-4e61-af71-042055623d32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.312424 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2480086-9709-4e61-af71-042055623d32-kube-api-access-7bdgl" (OuterVolumeSpecName: "kube-api-access-7bdgl") pod "d2480086-9709-4e61-af71-042055623d32" (UID: "d2480086-9709-4e61-af71-042055623d32"). InnerVolumeSpecName "kube-api-access-7bdgl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.622816 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bdgl\" (UniqueName: \"kubernetes.io/projected/d2480086-9709-4e61-af71-042055623d32-kube-api-access-7bdgl\") on node \"crc\" DevicePath \"\""
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.622854 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2480086-9709-4e61-af71-042055623d32-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.627689 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2480086-9709-4e61-af71-042055623d32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2480086-9709-4e61-af71-042055623d32" (UID: "d2480086-9709-4e61-af71-042055623d32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.652098 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"63da9616-1db2-49a5-8591-9c8bdbbb43a3","Type":"ContainerStarted","Data":"c7d077adda3a5a2948817335b239051015d20878806e0678989c395a0369bbaa"}
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.674107 4825 generic.go:334] "Generic (PLEG): container finished" podID="3d53d147-8362-48e9-b525-44249e49ae01" containerID="dc4e83a0b3ba33d512d64c0bb483c49691ed9e4523e60de0ca68207752301676" exitCode=0
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.674258 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" event={"ID":"3d53d147-8362-48e9-b525-44249e49ae01","Type":"ContainerDied","Data":"dc4e83a0b3ba33d512d64c0bb483c49691ed9e4523e60de0ca68207752301676"}
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.688415 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t8c8z"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.688512 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t8c8z" event={"ID":"d2480086-9709-4e61-af71-042055623d32","Type":"ContainerDied","Data":"babca5fe3ed1533d2c569e8fbb2ffabc2b6f6badd6963c3ee342df6219373b51"}
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.688586 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="babca5fe3ed1533d2c569e8fbb2ffabc2b6f6badd6963c3ee342df6219373b51"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.695055 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-679b6799cd-9xsrq"]
Jan 22 15:44:50 crc kubenswrapper[4825]: E0122 15:44:50.695665 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2480086-9709-4e61-af71-042055623d32" containerName="placement-db-sync"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.695691 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2480086-9709-4e61-af71-042055623d32" containerName="placement-db-sync"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.695877 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2480086-9709-4e61-af71-042055623d32" containerName="placement-db-sync"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.697556 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.699857 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.700504 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.706887 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5e86f1cc-a4dd-4f8f-b9b3-18806405875a","Type":"ContainerStarted","Data":"ce5e8a4589d1dccbbff53e439a2daadb44bd9b7499dab0d133cbd995fd47ef27"}
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.724615 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2480086-9709-4e61-af71-042055623d32-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.737071 4825 generic.go:334] "Generic (PLEG): container finished" podID="589c7924-baff-443f-b923-59a1348c709a" containerID="1ce120a81de5582340734a40599a7e8e65c947a0c732dfa76d07436a8232dab5" exitCode=0
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.739415 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6fxh4" event={"ID":"589c7924-baff-443f-b923-59a1348c709a","Type":"ContainerDied","Data":"1ce120a81de5582340734a40599a7e8e65c947a0c732dfa76d07436a8232dab5"}
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.741069 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-679b6799cd-9xsrq"]
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.823672 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2480086-9709-4e61-af71-042055623d32-config-data" (OuterVolumeSpecName: "config-data") pod "d2480086-9709-4e61-af71-042055623d32" (UID: "d2480086-9709-4e61-af71-042055623d32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.827513 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9634c15f-16c7-43e2-877b-934fa9467de7-public-tls-certs\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.827824 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9634c15f-16c7-43e2-877b-934fa9467de7-config-data\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.827904 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9634c15f-16c7-43e2-877b-934fa9467de7-combined-ca-bundle\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.827966 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsjvc\" (UniqueName: \"kubernetes.io/projected/9634c15f-16c7-43e2-877b-934fa9467de7-kube-api-access-wsjvc\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.828092 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9634c15f-16c7-43e2-877b-934fa9467de7-logs\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.828228 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9634c15f-16c7-43e2-877b-934fa9467de7-internal-tls-certs\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.828378 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9634c15f-16c7-43e2-877b-934fa9467de7-scripts\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.828513 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2480086-9709-4e61-af71-042055623d32-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.931220 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9634c15f-16c7-43e2-877b-934fa9467de7-public-tls-certs\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.931374 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9634c15f-16c7-43e2-877b-934fa9467de7-config-data\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.931416 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9634c15f-16c7-43e2-877b-934fa9467de7-combined-ca-bundle\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.931445 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsjvc\" (UniqueName: \"kubernetes.io/projected/9634c15f-16c7-43e2-877b-934fa9467de7-kube-api-access-wsjvc\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.931483 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9634c15f-16c7-43e2-877b-934fa9467de7-logs\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.931556 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9634c15f-16c7-43e2-877b-934fa9467de7-internal-tls-certs\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.931612 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9634c15f-16c7-43e2-877b-934fa9467de7-scripts\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.933750 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9634c15f-16c7-43e2-877b-934fa9467de7-logs\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.935951 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9634c15f-16c7-43e2-877b-934fa9467de7-combined-ca-bundle\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.937327 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9634c15f-16c7-43e2-877b-934fa9467de7-scripts\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.939611 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9634c15f-16c7-43e2-877b-934fa9467de7-internal-tls-certs\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.941386 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9634c15f-16c7-43e2-877b-934fa9467de7-config-data\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.943184 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9634c15f-16c7-43e2-877b-934fa9467de7-public-tls-certs\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:50 crc kubenswrapper[4825]: I0122 15:44:50.961440 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsjvc\" (UniqueName: \"kubernetes.io/projected/9634c15f-16c7-43e2-877b-934fa9467de7-kube-api-access-wsjvc\") pod \"placement-679b6799cd-9xsrq\" (UID: \"9634c15f-16c7-43e2-877b-934fa9467de7\") " pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.042025 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-679b6799cd-9xsrq"
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.554421 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9"
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.681541 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kwfvj"
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.744114 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-config\") pod \"3d53d147-8362-48e9-b525-44249e49ae01\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") "
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.744310 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-dns-svc\") pod \"3d53d147-8362-48e9-b525-44249e49ae01\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") "
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.744387 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-ovsdbserver-sb\") pod \"3d53d147-8362-48e9-b525-44249e49ae01\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") "
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.744492 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdf8j\" (UniqueName: \"kubernetes.io/projected/3d53d147-8362-48e9-b525-44249e49ae01-kube-api-access-bdf8j\") pod \"3d53d147-8362-48e9-b525-44249e49ae01\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") "
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.744589 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-ovsdbserver-nb\") pod \"3d53d147-8362-48e9-b525-44249e49ae01\" (UID: \"3d53d147-8362-48e9-b525-44249e49ae01\") "
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.756451 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"63da9616-1db2-49a5-8591-9c8bdbbb43a3","Type":"ContainerStarted","Data":"fe96204d9b101fc9c165f595318955f2f332311a76b05f8e0cbd8b217ee36320"}
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.762256 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d53d147-8362-48e9-b525-44249e49ae01-kube-api-access-bdf8j" (OuterVolumeSpecName: "kube-api-access-bdf8j") pod "3d53d147-8362-48e9-b525-44249e49ae01" (UID: "3d53d147-8362-48e9-b525-44249e49ae01"). InnerVolumeSpecName "kube-api-access-bdf8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.764090 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9" event={"ID":"3d53d147-8362-48e9-b525-44249e49ae01","Type":"ContainerDied","Data":"ddcea4628b3d8306e6d50e5d512a15a6df54c690ca0ca23634307cb0c20f12f2"}
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.764138 4825 scope.go:117] "RemoveContainer" containerID="dc4e83a0b3ba33d512d64c0bb483c49691ed9e4523e60de0ca68207752301676"
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.764192 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-bkbj9"
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.787348 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kwfvj"
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.787589 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kwfvj" event={"ID":"b3ca6d27-7a63-4ba6-8baf-8180c79cd810","Type":"ContainerDied","Data":"0b70ff5c320f4d247804f147af28239c72702199beeb72a5a2f06dfb71861a15"}
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.787618 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b70ff5c320f4d247804f147af28239c72702199beeb72a5a2f06dfb71861a15"
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.791282 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-cshtw" event={"ID":"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e","Type":"ContainerStarted","Data":"a6111f4cbe06aa79016ce81ccf5f78226b8a584b6987659384641d40833ca7a4"}
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.795364 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5e86f1cc-a4dd-4f8f-b9b3-18806405875a","Type":"ContainerStarted","Data":"8c3241f400a70e7b8917c2a01619d54ee026b3065b3d232b50b0afbece8406e4"}
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.817072 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-cshtw" podStartSLOduration=4.812679495 podStartE2EDuration="53.817050375s" podCreationTimestamp="2026-01-22 15:43:58 +0000 UTC" firstStartedPulling="2026-01-22 15:44:00.767394143 +0000 UTC m=+1187.528921053" lastFinishedPulling="2026-01-22 15:44:49.771765023 +0000 UTC m=+1236.533291933" observedRunningTime="2026-01-22 15:44:51.806721449 +0000 UTC m=+1238.568248379" watchObservedRunningTime="2026-01-22 15:44:51.817050375 +0000 UTC m=+1238.578577285"
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.832337 4825 scope.go:117] "RemoveContainer" containerID="d714f29a3362d4285848e566dd1864b577f3eb2114a4830e8e6532627c8c56ae"
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.846837 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-credential-keys\") pod \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") "
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.847026 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-scripts\") pod \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") "
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.847211 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-combined-ca-bundle\") pod \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") "
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.847254 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9bd4\" (UniqueName: \"kubernetes.io/projected/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-kube-api-access-v9bd4\") pod \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") "
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.847343 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-config-data\") pod \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") "
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.847389 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-fernet-keys\") pod \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\" (UID: \"b3ca6d27-7a63-4ba6-8baf-8180c79cd810\") "
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.849300 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdf8j\" (UniqueName: \"kubernetes.io/projected/3d53d147-8362-48e9-b525-44249e49ae01-kube-api-access-bdf8j\") on node \"crc\" DevicePath \"\""
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.852307 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b3ca6d27-7a63-4ba6-8baf-8180c79cd810" (UID: "b3ca6d27-7a63-4ba6-8baf-8180c79cd810"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.852362 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-kube-api-access-v9bd4" (OuterVolumeSpecName: "kube-api-access-v9bd4") pod "b3ca6d27-7a63-4ba6-8baf-8180c79cd810" (UID: "b3ca6d27-7a63-4ba6-8baf-8180c79cd810"). InnerVolumeSpecName "kube-api-access-v9bd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.852400 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-scripts" (OuterVolumeSpecName: "scripts") pod "b3ca6d27-7a63-4ba6-8baf-8180c79cd810" (UID: "b3ca6d27-7a63-4ba6-8baf-8180c79cd810"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.861489 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b3ca6d27-7a63-4ba6-8baf-8180c79cd810" (UID: "b3ca6d27-7a63-4ba6-8baf-8180c79cd810"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.868036 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3d53d147-8362-48e9-b525-44249e49ae01" (UID: "3d53d147-8362-48e9-b525-44249e49ae01"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.868712 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3d53d147-8362-48e9-b525-44249e49ae01" (UID: "3d53d147-8362-48e9-b525-44249e49ae01"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.873731 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3d53d147-8362-48e9-b525-44249e49ae01" (UID: "3d53d147-8362-48e9-b525-44249e49ae01"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.874004 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-config" (OuterVolumeSpecName: "config") pod "3d53d147-8362-48e9-b525-44249e49ae01" (UID: "3d53d147-8362-48e9-b525-44249e49ae01"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.953211 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.956067 4825 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.956080 4825 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.956096 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.956105 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.956114 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.956123 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9bd4\" (UniqueName: \"kubernetes.io/projected/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-kube-api-access-v9bd4\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.956133 4825 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d53d147-8362-48e9-b525-44249e49ae01-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.955748 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-config-data" (OuterVolumeSpecName: "config-data") pod "b3ca6d27-7a63-4ba6-8baf-8180c79cd810" (UID: "b3ca6d27-7a63-4ba6-8baf-8180c79cd810"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:51 crc kubenswrapper[4825]: I0122 15:44:51.993653 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3ca6d27-7a63-4ba6-8baf-8180c79cd810" (UID: "b3ca6d27-7a63-4ba6-8baf-8180c79cd810"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.008436 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-679b6799cd-9xsrq"] Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.057202 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.057249 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ca6d27-7a63-4ba6-8baf-8180c79cd810-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.349363 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bkbj9"] Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.359801 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bkbj9"] Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.414528 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-6fxh4" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.661135 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/589c7924-baff-443f-b923-59a1348c709a-db-sync-config-data\") pod \"589c7924-baff-443f-b923-59a1348c709a\" (UID: \"589c7924-baff-443f-b923-59a1348c709a\") " Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.661306 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/589c7924-baff-443f-b923-59a1348c709a-combined-ca-bundle\") pod \"589c7924-baff-443f-b923-59a1348c709a\" (UID: \"589c7924-baff-443f-b923-59a1348c709a\") " Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.661374 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhrx4\" (UniqueName: \"kubernetes.io/projected/589c7924-baff-443f-b923-59a1348c709a-kube-api-access-qhrx4\") pod \"589c7924-baff-443f-b923-59a1348c709a\" (UID: \"589c7924-baff-443f-b923-59a1348c709a\") " Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.665892 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/589c7924-baff-443f-b923-59a1348c709a-kube-api-access-qhrx4" (OuterVolumeSpecName: "kube-api-access-qhrx4") pod "589c7924-baff-443f-b923-59a1348c709a" (UID: "589c7924-baff-443f-b923-59a1348c709a"). InnerVolumeSpecName "kube-api-access-qhrx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.670707 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/589c7924-baff-443f-b923-59a1348c709a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "589c7924-baff-443f-b923-59a1348c709a" (UID: "589c7924-baff-443f-b923-59a1348c709a"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.710161 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/589c7924-baff-443f-b923-59a1348c709a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "589c7924-baff-443f-b923-59a1348c709a" (UID: "589c7924-baff-443f-b923-59a1348c709a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.771394 4825 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/589c7924-baff-443f-b923-59a1348c709a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.771581 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/589c7924-baff-443f-b923-59a1348c709a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.771669 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhrx4\" (UniqueName: \"kubernetes.io/projected/589c7924-baff-443f-b923-59a1348c709a-kube-api-access-qhrx4\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.795053 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7bfd68784d-7vgv2"] Jan 22 15:44:52 crc kubenswrapper[4825]: E0122 15:44:52.795957 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d53d147-8362-48e9-b525-44249e49ae01" containerName="init" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.795997 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d53d147-8362-48e9-b525-44249e49ae01" containerName="init" Jan 22 15:44:52 crc kubenswrapper[4825]: E0122 15:44:52.796020 4825 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b3ca6d27-7a63-4ba6-8baf-8180c79cd810" containerName="keystone-bootstrap" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.796032 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ca6d27-7a63-4ba6-8baf-8180c79cd810" containerName="keystone-bootstrap" Jan 22 15:44:52 crc kubenswrapper[4825]: E0122 15:44:52.796059 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="589c7924-baff-443f-b923-59a1348c709a" containerName="barbican-db-sync" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.796066 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="589c7924-baff-443f-b923-59a1348c709a" containerName="barbican-db-sync" Jan 22 15:44:52 crc kubenswrapper[4825]: E0122 15:44:52.796089 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d53d147-8362-48e9-b525-44249e49ae01" containerName="dnsmasq-dns" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.796096 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d53d147-8362-48e9-b525-44249e49ae01" containerName="dnsmasq-dns" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.796359 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ca6d27-7a63-4ba6-8baf-8180c79cd810" containerName="keystone-bootstrap" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.796378 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="589c7924-baff-443f-b923-59a1348c709a" containerName="barbican-db-sync" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.796396 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d53d147-8362-48e9-b525-44249e49ae01" containerName="dnsmasq-dns" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.797204 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.800695 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g9wlz" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.801004 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.801222 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.801567 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.801790 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.802047 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.811028 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7bfd68784d-7vgv2"] Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.875267 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-internal-tls-certs\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.875339 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-scripts\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " 
pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.875365 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-combined-ca-bundle\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.875392 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-credential-keys\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.875422 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-fernet-keys\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.875447 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-public-tls-certs\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.875474 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jsqc\" (UniqueName: \"kubernetes.io/projected/e2f8bb1f-7234-465d-96ba-cd26f508d35a-kube-api-access-5jsqc\") pod \"keystone-7bfd68784d-7vgv2\" (UID: 
\"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.875504 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-config-data\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.898043 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6fxh4" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.898918 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6fxh4" event={"ID":"589c7924-baff-443f-b923-59a1348c709a","Type":"ContainerDied","Data":"de55792865ba5a02c129844a00905c9b042bd407f7ec656fe41368c8fafe5d2f"} Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.898944 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de55792865ba5a02c129844a00905c9b042bd407f7ec656fe41368c8fafe5d2f" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.901872 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"63da9616-1db2-49a5-8591-9c8bdbbb43a3","Type":"ContainerStarted","Data":"ef37cd4c07f479e49fce30a552f1592858bb9c5965d59c1a3c43b769e75c9029"} Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.938227 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-679b6799cd-9xsrq" event={"ID":"9634c15f-16c7-43e2-877b-934fa9467de7","Type":"ContainerStarted","Data":"41c7c23b22b0479760ae18a66de149520894b99d791298ac74a08e56704f0419"} Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.938277 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-679b6799cd-9xsrq" 
event={"ID":"9634c15f-16c7-43e2-877b-934fa9467de7","Type":"ContainerStarted","Data":"35fc39f0af7780c5e9e6313523ed7d48a768fa7d09f4ec11e3b33e6925e90e69"} Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.963771 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.963746597 podStartE2EDuration="4.963746597s" podCreationTimestamp="2026-01-22 15:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:44:52.938587364 +0000 UTC m=+1239.700114274" watchObservedRunningTime="2026-01-22 15:44:52.963746597 +0000 UTC m=+1239.725273507" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.978308 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-fernet-keys\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.978392 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-public-tls-certs\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.978416 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jsqc\" (UniqueName: \"kubernetes.io/projected/e2f8bb1f-7234-465d-96ba-cd26f508d35a-kube-api-access-5jsqc\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.978451 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-config-data\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.978555 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-internal-tls-certs\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.978607 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-scripts\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.978631 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-combined-ca-bundle\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.978655 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-credential-keys\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.986088 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-fernet-keys\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.989566 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-scripts\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.990797 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-config-data\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:52 crc kubenswrapper[4825]: I0122 15:44:52.995511 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-public-tls-certs\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.006503 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-credential-keys\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.006773 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jsqc\" (UniqueName: \"kubernetes.io/projected/e2f8bb1f-7234-465d-96ba-cd26f508d35a-kube-api-access-5jsqc\") pod \"keystone-7bfd68784d-7vgv2\" (UID: 
\"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.007237 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-internal-tls-certs\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.009918 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f8bb1f-7234-465d-96ba-cd26f508d35a-combined-ca-bundle\") pod \"keystone-7bfd68784d-7vgv2\" (UID: \"e2f8bb1f-7234-465d-96ba-cd26f508d35a\") " pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.126109 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-57fcb6778b-725vd"] Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.128000 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.134601 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.135324 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jd6zq" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.135581 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.142239 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.148089 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-f54bfddd7-7dxc5"] Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.150042 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-f54bfddd7-7dxc5" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.171620 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.267024 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57fcb6778b-725vd"] Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.267365 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-f54bfddd7-7dxc5"] Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.345563 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/731b7ae4-c576-4832-851c-0a832ad56e31-logs\") pod \"barbican-worker-f54bfddd7-7dxc5\" (UID: \"731b7ae4-c576-4832-851c-0a832ad56e31\") " pod="openstack/barbican-worker-f54bfddd7-7dxc5" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.345605 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/731b7ae4-c576-4832-851c-0a832ad56e31-config-data\") pod \"barbican-worker-f54bfddd7-7dxc5\" (UID: \"731b7ae4-c576-4832-851c-0a832ad56e31\") " pod="openstack/barbican-worker-f54bfddd7-7dxc5" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.345652 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ce3ac0ed-81fc-479f-b4b5-2448549178d2-combined-ca-bundle\") pod \"barbican-keystone-listener-57fcb6778b-725vd\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.345688 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731b7ae4-c576-4832-851c-0a832ad56e31-combined-ca-bundle\") pod \"barbican-worker-f54bfddd7-7dxc5\" (UID: \"731b7ae4-c576-4832-851c-0a832ad56e31\") " pod="openstack/barbican-worker-f54bfddd7-7dxc5" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.345742 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lz46\" (UniqueName: \"kubernetes.io/projected/ce3ac0ed-81fc-479f-b4b5-2448549178d2-kube-api-access-5lz46\") pod \"barbican-keystone-listener-57fcb6778b-725vd\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.345848 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce3ac0ed-81fc-479f-b4b5-2448549178d2-config-data-custom\") pod \"barbican-keystone-listener-57fcb6778b-725vd\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.345938 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce3ac0ed-81fc-479f-b4b5-2448549178d2-logs\") pod \"barbican-keystone-listener-57fcb6778b-725vd\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" Jan 22 15:44:53 crc 
kubenswrapper[4825]: I0122 15:44:53.345997 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng625\" (UniqueName: \"kubernetes.io/projected/731b7ae4-c576-4832-851c-0a832ad56e31-kube-api-access-ng625\") pod \"barbican-worker-f54bfddd7-7dxc5\" (UID: \"731b7ae4-c576-4832-851c-0a832ad56e31\") " pod="openstack/barbican-worker-f54bfddd7-7dxc5" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.346045 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/731b7ae4-c576-4832-851c-0a832ad56e31-config-data-custom\") pod \"barbican-worker-f54bfddd7-7dxc5\" (UID: \"731b7ae4-c576-4832-851c-0a832ad56e31\") " pod="openstack/barbican-worker-f54bfddd7-7dxc5" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.346073 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce3ac0ed-81fc-479f-b4b5-2448549178d2-config-data\") pod \"barbican-keystone-listener-57fcb6778b-725vd\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.416654 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-llsm4"] Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.418698 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.493620 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/731b7ae4-c576-4832-851c-0a832ad56e31-config-data-custom\") pod \"barbican-worker-f54bfddd7-7dxc5\" (UID: \"731b7ae4-c576-4832-851c-0a832ad56e31\") " pod="openstack/barbican-worker-f54bfddd7-7dxc5" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.493668 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-config\") pod \"dnsmasq-dns-848cf88cfc-llsm4\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.493694 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce3ac0ed-81fc-479f-b4b5-2448549178d2-config-data\") pod \"barbican-keystone-listener-57fcb6778b-725vd\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.493770 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9skcs\" (UniqueName: \"kubernetes.io/projected/10c3d4a5-599e-4f58-9062-9095ea1afd1a-kube-api-access-9skcs\") pod \"dnsmasq-dns-848cf88cfc-llsm4\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.493814 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/731b7ae4-c576-4832-851c-0a832ad56e31-logs\") pod \"barbican-worker-f54bfddd7-7dxc5\" (UID: 
\"731b7ae4-c576-4832-851c-0a832ad56e31\") " pod="openstack/barbican-worker-f54bfddd7-7dxc5" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.493842 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/731b7ae4-c576-4832-851c-0a832ad56e31-config-data\") pod \"barbican-worker-f54bfddd7-7dxc5\" (UID: \"731b7ae4-c576-4832-851c-0a832ad56e31\") " pod="openstack/barbican-worker-f54bfddd7-7dxc5" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.493885 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce3ac0ed-81fc-479f-b4b5-2448549178d2-combined-ca-bundle\") pod \"barbican-keystone-listener-57fcb6778b-725vd\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.493937 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731b7ae4-c576-4832-851c-0a832ad56e31-combined-ca-bundle\") pod \"barbican-worker-f54bfddd7-7dxc5\" (UID: \"731b7ae4-c576-4832-851c-0a832ad56e31\") " pod="openstack/barbican-worker-f54bfddd7-7dxc5" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.494020 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-llsm4\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.494047 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lz46\" (UniqueName: \"kubernetes.io/projected/ce3ac0ed-81fc-479f-b4b5-2448549178d2-kube-api-access-5lz46\") pod 
\"barbican-keystone-listener-57fcb6778b-725vd\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.494096 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-llsm4\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.494178 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce3ac0ed-81fc-479f-b4b5-2448549178d2-config-data-custom\") pod \"barbican-keystone-listener-57fcb6778b-725vd\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.494222 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce3ac0ed-81fc-479f-b4b5-2448549178d2-logs\") pod \"barbican-keystone-listener-57fcb6778b-725vd\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.494239 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-llsm4\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.494273 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-llsm4\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.494327 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng625\" (UniqueName: \"kubernetes.io/projected/731b7ae4-c576-4832-851c-0a832ad56e31-kube-api-access-ng625\") pod \"barbican-worker-f54bfddd7-7dxc5\" (UID: \"731b7ae4-c576-4832-851c-0a832ad56e31\") " pod="openstack/barbican-worker-f54bfddd7-7dxc5" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.501944 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/731b7ae4-c576-4832-851c-0a832ad56e31-logs\") pod \"barbican-worker-f54bfddd7-7dxc5\" (UID: \"731b7ae4-c576-4832-851c-0a832ad56e31\") " pod="openstack/barbican-worker-f54bfddd7-7dxc5" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.502638 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce3ac0ed-81fc-479f-b4b5-2448549178d2-logs\") pod \"barbican-keystone-listener-57fcb6778b-725vd\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.507756 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731b7ae4-c576-4832-851c-0a832ad56e31-combined-ca-bundle\") pod \"barbican-worker-f54bfddd7-7dxc5\" (UID: \"731b7ae4-c576-4832-851c-0a832ad56e31\") " pod="openstack/barbican-worker-f54bfddd7-7dxc5" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.513791 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/731b7ae4-c576-4832-851c-0a832ad56e31-config-data\") pod \"barbican-worker-f54bfddd7-7dxc5\" (UID: \"731b7ae4-c576-4832-851c-0a832ad56e31\") " pod="openstack/barbican-worker-f54bfddd7-7dxc5" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.514149 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/731b7ae4-c576-4832-851c-0a832ad56e31-config-data-custom\") pod \"barbican-worker-f54bfddd7-7dxc5\" (UID: \"731b7ae4-c576-4832-851c-0a832ad56e31\") " pod="openstack/barbican-worker-f54bfddd7-7dxc5" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.516888 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce3ac0ed-81fc-479f-b4b5-2448549178d2-combined-ca-bundle\") pod \"barbican-keystone-listener-57fcb6778b-725vd\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.520603 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce3ac0ed-81fc-479f-b4b5-2448549178d2-config-data\") pod \"barbican-keystone-listener-57fcb6778b-725vd\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.538010 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce3ac0ed-81fc-479f-b4b5-2448549178d2-config-data-custom\") pod \"barbican-keystone-listener-57fcb6778b-725vd\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.769273 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-config\") pod \"dnsmasq-dns-848cf88cfc-llsm4\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.769329 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9skcs\" (UniqueName: \"kubernetes.io/projected/10c3d4a5-599e-4f58-9062-9095ea1afd1a-kube-api-access-9skcs\") pod \"dnsmasq-dns-848cf88cfc-llsm4\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.769396 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-llsm4\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.769439 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-llsm4\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.769504 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-llsm4\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.769527 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-llsm4\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.770441 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-llsm4\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.771290 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-config\") pod \"dnsmasq-dns-848cf88cfc-llsm4\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.773064 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-llsm4\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.774188 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-llsm4\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:53 crc kubenswrapper[4825]: I0122 15:44:53.774915 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-dns-swift-storage-0\") pod 
\"dnsmasq-dns-848cf88cfc-llsm4\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.357365 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng625\" (UniqueName: \"kubernetes.io/projected/731b7ae4-c576-4832-851c-0a832ad56e31-kube-api-access-ng625\") pod \"barbican-worker-f54bfddd7-7dxc5\" (UID: \"731b7ae4-c576-4832-851c-0a832ad56e31\") " pod="openstack/barbican-worker-f54bfddd7-7dxc5" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.507331 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-f54bfddd7-7dxc5" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.514952 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lz46\" (UniqueName: \"kubernetes.io/projected/ce3ac0ed-81fc-479f-b4b5-2448549178d2-kube-api-access-5lz46\") pod \"barbican-keystone-listener-57fcb6778b-725vd\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.517277 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9skcs\" (UniqueName: \"kubernetes.io/projected/10c3d4a5-599e-4f58-9062-9095ea1afd1a-kube-api-access-9skcs\") pod \"dnsmasq-dns-848cf88cfc-llsm4\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.528869 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d53d147-8362-48e9-b525-44249e49ae01" path="/var/lib/kubelet/pods/3d53d147-8362-48e9-b525-44249e49ae01/volumes" Jan 22 15:44:54 crc kubenswrapper[4825]: E0122 15:44:54.530054 4825 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.013s" Jan 22 15:44:54 crc 
kubenswrapper[4825]: I0122 15:44:54.530132 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-llsm4"] Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.564785 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-679b6799cd-9xsrq" event={"ID":"9634c15f-16c7-43e2-877b-934fa9467de7","Type":"ContainerStarted","Data":"cc08d39eac2ec83060fd981f45d8dda7829a2e4fa784b882b02bfb3ab7a9797f"} Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.564962 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-679b6799cd-9xsrq" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.565081 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-679b6799cd-9xsrq" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.574258 4825 generic.go:334] "Generic (PLEG): container finished" podID="7211decb-e02d-47e6-9ea7-493e8e6a3743" containerID="2e6b7f886ffb9b325b3310026c9c248243d9843bf701112f7ebfb5473836de6e" exitCode=0 Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.575946 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-54vjv" event={"ID":"7211decb-e02d-47e6-9ea7-493e8e6a3743","Type":"ContainerDied","Data":"2e6b7f886ffb9b325b3310026c9c248243d9843bf701112f7ebfb5473836de6e"} Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.587479 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.604272 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-76545dccfd-vqph7"] Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.616904 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.627623 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v725\" (UniqueName: \"kubernetes.io/projected/1ccd62bc-d183-4918-91d6-fd5be08f6dc1-kube-api-access-7v725\") pod \"barbican-keystone-listener-76545dccfd-vqph7\" (UID: \"1ccd62bc-d183-4918-91d6-fd5be08f6dc1\") " pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.627714 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ccd62bc-d183-4918-91d6-fd5be08f6dc1-config-data-custom\") pod \"barbican-keystone-listener-76545dccfd-vqph7\" (UID: \"1ccd62bc-d183-4918-91d6-fd5be08f6dc1\") " pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.627748 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ccd62bc-d183-4918-91d6-fd5be08f6dc1-config-data\") pod \"barbican-keystone-listener-76545dccfd-vqph7\" (UID: \"1ccd62bc-d183-4918-91d6-fd5be08f6dc1\") " pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.627767 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ccd62bc-d183-4918-91d6-fd5be08f6dc1-combined-ca-bundle\") pod \"barbican-keystone-listener-76545dccfd-vqph7\" (UID: \"1ccd62bc-d183-4918-91d6-fd5be08f6dc1\") " pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.627858 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ccd62bc-d183-4918-91d6-fd5be08f6dc1-logs\") pod \"barbican-keystone-listener-76545dccfd-vqph7\" (UID: \"1ccd62bc-d183-4918-91d6-fd5be08f6dc1\") " pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.651511 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.689042 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-894b498b5-mnnlr"] Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.690769 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-894b498b5-mnnlr" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.700652 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-76545dccfd-vqph7"] Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.714492 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-894b498b5-mnnlr"] Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.729969 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v725\" (UniqueName: \"kubernetes.io/projected/1ccd62bc-d183-4918-91d6-fd5be08f6dc1-kube-api-access-7v725\") pod \"barbican-keystone-listener-76545dccfd-vqph7\" (UID: \"1ccd62bc-d183-4918-91d6-fd5be08f6dc1\") " pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.730103 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ccd62bc-d183-4918-91d6-fd5be08f6dc1-config-data-custom\") pod \"barbican-keystone-listener-76545dccfd-vqph7\" (UID: \"1ccd62bc-d183-4918-91d6-fd5be08f6dc1\") " 
pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.730142 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ccd62bc-d183-4918-91d6-fd5be08f6dc1-config-data\") pod \"barbican-keystone-listener-76545dccfd-vqph7\" (UID: \"1ccd62bc-d183-4918-91d6-fd5be08f6dc1\") " pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.730181 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ccd62bc-d183-4918-91d6-fd5be08f6dc1-combined-ca-bundle\") pod \"barbican-keystone-listener-76545dccfd-vqph7\" (UID: \"1ccd62bc-d183-4918-91d6-fd5be08f6dc1\") " pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.733332 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ccd62bc-d183-4918-91d6-fd5be08f6dc1-logs\") pod \"barbican-keystone-listener-76545dccfd-vqph7\" (UID: \"1ccd62bc-d183-4918-91d6-fd5be08f6dc1\") " pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.734548 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ccd62bc-d183-4918-91d6-fd5be08f6dc1-logs\") pod \"barbican-keystone-listener-76545dccfd-vqph7\" (UID: \"1ccd62bc-d183-4918-91d6-fd5be08f6dc1\") " pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.750062 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-657ccd9fc8-rsx5r"] Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.752162 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.755295 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.794246 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-657ccd9fc8-rsx5r"] Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.801275 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ccd62bc-d183-4918-91d6-fd5be08f6dc1-config-data-custom\") pod \"barbican-keystone-listener-76545dccfd-vqph7\" (UID: \"1ccd62bc-d183-4918-91d6-fd5be08f6dc1\") " pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.802003 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ccd62bc-d183-4918-91d6-fd5be08f6dc1-config-data\") pod \"barbican-keystone-listener-76545dccfd-vqph7\" (UID: \"1ccd62bc-d183-4918-91d6-fd5be08f6dc1\") " pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.887998 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ccd62bc-d183-4918-91d6-fd5be08f6dc1-combined-ca-bundle\") pod \"barbican-keystone-listener-76545dccfd-vqph7\" (UID: \"1ccd62bc-d183-4918-91d6-fd5be08f6dc1\") " pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.889213 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/349354a1-c3d7-4f6a-b85a-3a7b490b98da-logs\") pod \"barbican-worker-894b498b5-mnnlr\" (UID: \"349354a1-c3d7-4f6a-b85a-3a7b490b98da\") " 
pod="openstack/barbican-worker-894b498b5-mnnlr" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.889517 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349354a1-c3d7-4f6a-b85a-3a7b490b98da-config-data\") pod \"barbican-worker-894b498b5-mnnlr\" (UID: \"349354a1-c3d7-4f6a-b85a-3a7b490b98da\") " pod="openstack/barbican-worker-894b498b5-mnnlr" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.889606 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/349354a1-c3d7-4f6a-b85a-3a7b490b98da-config-data-custom\") pod \"barbican-worker-894b498b5-mnnlr\" (UID: \"349354a1-c3d7-4f6a-b85a-3a7b490b98da\") " pod="openstack/barbican-worker-894b498b5-mnnlr" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.889654 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v725\" (UniqueName: \"kubernetes.io/projected/1ccd62bc-d183-4918-91d6-fd5be08f6dc1-kube-api-access-7v725\") pod \"barbican-keystone-listener-76545dccfd-vqph7\" (UID: \"1ccd62bc-d183-4918-91d6-fd5be08f6dc1\") " pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.889721 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6952fded-9cdf-4220-9f73-ff832415b100-logs\") pod \"barbican-api-657ccd9fc8-rsx5r\" (UID: \"6952fded-9cdf-4220-9f73-ff832415b100\") " pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.889874 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6952fded-9cdf-4220-9f73-ff832415b100-config-data\") pod \"barbican-api-657ccd9fc8-rsx5r\" (UID: 
\"6952fded-9cdf-4220-9f73-ff832415b100\") " pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.889958 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6952fded-9cdf-4220-9f73-ff832415b100-combined-ca-bundle\") pod \"barbican-api-657ccd9fc8-rsx5r\" (UID: \"6952fded-9cdf-4220-9f73-ff832415b100\") " pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.890092 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349354a1-c3d7-4f6a-b85a-3a7b490b98da-combined-ca-bundle\") pod \"barbican-worker-894b498b5-mnnlr\" (UID: \"349354a1-c3d7-4f6a-b85a-3a7b490b98da\") " pod="openstack/barbican-worker-894b498b5-mnnlr" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.890461 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6952fded-9cdf-4220-9f73-ff832415b100-config-data-custom\") pod \"barbican-api-657ccd9fc8-rsx5r\" (UID: \"6952fded-9cdf-4220-9f73-ff832415b100\") " pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.890587 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kttr2\" (UniqueName: \"kubernetes.io/projected/349354a1-c3d7-4f6a-b85a-3a7b490b98da-kube-api-access-kttr2\") pod \"barbican-worker-894b498b5-mnnlr\" (UID: \"349354a1-c3d7-4f6a-b85a-3a7b490b98da\") " pod="openstack/barbican-worker-894b498b5-mnnlr" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.895132 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxs2p\" (UniqueName: 
\"kubernetes.io/projected/6952fded-9cdf-4220-9f73-ff832415b100-kube-api-access-bxs2p\") pod \"barbican-api-657ccd9fc8-rsx5r\" (UID: \"6952fded-9cdf-4220-9f73-ff832415b100\") " pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.942263 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7bfd68784d-7vgv2"] Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.948856 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-679b6799cd-9xsrq" podStartSLOduration=4.9488372 podStartE2EDuration="4.9488372s" podCreationTimestamp="2026-01-22 15:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:44:54.658424865 +0000 UTC m=+1241.419951775" watchObservedRunningTime="2026-01-22 15:44:54.9488372 +0000 UTC m=+1241.710364110" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.974412 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.997451 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/349354a1-c3d7-4f6a-b85a-3a7b490b98da-logs\") pod \"barbican-worker-894b498b5-mnnlr\" (UID: \"349354a1-c3d7-4f6a-b85a-3a7b490b98da\") " pod="openstack/barbican-worker-894b498b5-mnnlr" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.997554 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349354a1-c3d7-4f6a-b85a-3a7b490b98da-config-data\") pod \"barbican-worker-894b498b5-mnnlr\" (UID: \"349354a1-c3d7-4f6a-b85a-3a7b490b98da\") " pod="openstack/barbican-worker-894b498b5-mnnlr" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.997590 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/349354a1-c3d7-4f6a-b85a-3a7b490b98da-config-data-custom\") pod \"barbican-worker-894b498b5-mnnlr\" (UID: \"349354a1-c3d7-4f6a-b85a-3a7b490b98da\") " pod="openstack/barbican-worker-894b498b5-mnnlr" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.997639 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6952fded-9cdf-4220-9f73-ff832415b100-logs\") pod \"barbican-api-657ccd9fc8-rsx5r\" (UID: \"6952fded-9cdf-4220-9f73-ff832415b100\") " pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.997719 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6952fded-9cdf-4220-9f73-ff832415b100-config-data\") pod \"barbican-api-657ccd9fc8-rsx5r\" (UID: \"6952fded-9cdf-4220-9f73-ff832415b100\") " pod="openstack/barbican-api-657ccd9fc8-rsx5r" 
Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.997743 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6952fded-9cdf-4220-9f73-ff832415b100-combined-ca-bundle\") pod \"barbican-api-657ccd9fc8-rsx5r\" (UID: \"6952fded-9cdf-4220-9f73-ff832415b100\") " pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.997806 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349354a1-c3d7-4f6a-b85a-3a7b490b98da-combined-ca-bundle\") pod \"barbican-worker-894b498b5-mnnlr\" (UID: \"349354a1-c3d7-4f6a-b85a-3a7b490b98da\") " pod="openstack/barbican-worker-894b498b5-mnnlr" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.997830 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6952fded-9cdf-4220-9f73-ff832415b100-config-data-custom\") pod \"barbican-api-657ccd9fc8-rsx5r\" (UID: \"6952fded-9cdf-4220-9f73-ff832415b100\") " pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.997886 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kttr2\" (UniqueName: \"kubernetes.io/projected/349354a1-c3d7-4f6a-b85a-3a7b490b98da-kube-api-access-kttr2\") pod \"barbican-worker-894b498b5-mnnlr\" (UID: \"349354a1-c3d7-4f6a-b85a-3a7b490b98da\") " pod="openstack/barbican-worker-894b498b5-mnnlr" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.997917 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxs2p\" (UniqueName: \"kubernetes.io/projected/6952fded-9cdf-4220-9f73-ff832415b100-kube-api-access-bxs2p\") pod \"barbican-api-657ccd9fc8-rsx5r\" (UID: \"6952fded-9cdf-4220-9f73-ff832415b100\") " 
pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:44:54 crc kubenswrapper[4825]: I0122 15:44:54.999648 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6952fded-9cdf-4220-9f73-ff832415b100-logs\") pod \"barbican-api-657ccd9fc8-rsx5r\" (UID: \"6952fded-9cdf-4220-9f73-ff832415b100\") " pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.000827 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/349354a1-c3d7-4f6a-b85a-3a7b490b98da-logs\") pod \"barbican-worker-894b498b5-mnnlr\" (UID: \"349354a1-c3d7-4f6a-b85a-3a7b490b98da\") " pod="openstack/barbican-worker-894b498b5-mnnlr" Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.006466 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6952fded-9cdf-4220-9f73-ff832415b100-config-data-custom\") pod \"barbican-api-657ccd9fc8-rsx5r\" (UID: \"6952fded-9cdf-4220-9f73-ff832415b100\") " pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.015659 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349354a1-c3d7-4f6a-b85a-3a7b490b98da-combined-ca-bundle\") pod \"barbican-worker-894b498b5-mnnlr\" (UID: \"349354a1-c3d7-4f6a-b85a-3a7b490b98da\") " pod="openstack/barbican-worker-894b498b5-mnnlr" Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.016780 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kttr2\" (UniqueName: \"kubernetes.io/projected/349354a1-c3d7-4f6a-b85a-3a7b490b98da-kube-api-access-kttr2\") pod \"barbican-worker-894b498b5-mnnlr\" (UID: \"349354a1-c3d7-4f6a-b85a-3a7b490b98da\") " pod="openstack/barbican-worker-894b498b5-mnnlr" Jan 22 15:44:55 crc kubenswrapper[4825]: 
I0122 15:44:55.019726 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349354a1-c3d7-4f6a-b85a-3a7b490b98da-config-data\") pod \"barbican-worker-894b498b5-mnnlr\" (UID: \"349354a1-c3d7-4f6a-b85a-3a7b490b98da\") " pod="openstack/barbican-worker-894b498b5-mnnlr" Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.020926 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxs2p\" (UniqueName: \"kubernetes.io/projected/6952fded-9cdf-4220-9f73-ff832415b100-kube-api-access-bxs2p\") pod \"barbican-api-657ccd9fc8-rsx5r\" (UID: \"6952fded-9cdf-4220-9f73-ff832415b100\") " pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.021811 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/349354a1-c3d7-4f6a-b85a-3a7b490b98da-config-data-custom\") pod \"barbican-worker-894b498b5-mnnlr\" (UID: \"349354a1-c3d7-4f6a-b85a-3a7b490b98da\") " pod="openstack/barbican-worker-894b498b5-mnnlr" Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.045198 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6952fded-9cdf-4220-9f73-ff832415b100-combined-ca-bundle\") pod \"barbican-api-657ccd9fc8-rsx5r\" (UID: \"6952fded-9cdf-4220-9f73-ff832415b100\") " pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.070124 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6952fded-9cdf-4220-9f73-ff832415b100-config-data\") pod \"barbican-api-657ccd9fc8-rsx5r\" (UID: \"6952fded-9cdf-4220-9f73-ff832415b100\") " pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.127335 4825 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.308461 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-894b498b5-mnnlr" Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.482833 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57fcb6778b-725vd"] Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.491174 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-f54bfddd7-7dxc5"] Jan 22 15:44:55 crc kubenswrapper[4825]: W0122 15:44:55.492055 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce3ac0ed_81fc_479f_b4b5_2448549178d2.slice/crio-e853b43ab16952656184677c9e65e25af280869c2eb289d2d588ed77d2b6e03c WatchSource:0}: Error finding container e853b43ab16952656184677c9e65e25af280869c2eb289d2d588ed77d2b6e03c: Status 404 returned error can't find the container with id e853b43ab16952656184677c9e65e25af280869c2eb289d2d588ed77d2b6e03c Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.747224 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" event={"ID":"ce3ac0ed-81fc-479f-b4b5-2448549178d2","Type":"ContainerStarted","Data":"e853b43ab16952656184677c9e65e25af280869c2eb289d2d588ed77d2b6e03c"} Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.770918 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f54bfddd7-7dxc5" event={"ID":"731b7ae4-c576-4832-851c-0a832ad56e31","Type":"ContainerStarted","Data":"99227ab0f96fa866212b0c4e3f027a152bcd416c3a7e29af4afa7db1e80f479c"} Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.781141 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"5e86f1cc-a4dd-4f8f-b9b3-18806405875a","Type":"ContainerStarted","Data":"aca2265505e3d7bf617ccb5f32f2e1e77e1a1fdd3326cfdb46be0e12e5685489"} Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.788152 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-llsm4"] Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.789747 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bfd68784d-7vgv2" event={"ID":"e2f8bb1f-7234-465d-96ba-cd26f508d35a","Type":"ContainerStarted","Data":"537e4aa5be19aa67289d1a256013d80c1adb67201013ba360df8421ef318f9fd"} Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.789789 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bfd68784d-7vgv2" event={"ID":"e2f8bb1f-7234-465d-96ba-cd26f508d35a","Type":"ContainerStarted","Data":"dcb5308119ef3847d9e67a09e5d8a9d8b654000348695209946d80ff04b59b5b"} Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.828399 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.828376904 podStartE2EDuration="7.828376904s" podCreationTimestamp="2026-01-22 15:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:44:55.80871807 +0000 UTC m=+1242.570244980" watchObservedRunningTime="2026-01-22 15:44:55.828376904 +0000 UTC m=+1242.589903814" Jan 22 15:44:55 crc kubenswrapper[4825]: I0122 15:44:55.990946 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7bfd68784d-7vgv2" podStartSLOduration=3.9909296100000002 podStartE2EDuration="3.99092961s" podCreationTimestamp="2026-01-22 15:44:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:44:55.968348691 +0000 UTC m=+1242.729875602" 
watchObservedRunningTime="2026-01-22 15:44:55.99092961 +0000 UTC m=+1242.752456520" Jan 22 15:44:56 crc kubenswrapper[4825]: I0122 15:44:56.109481 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-76545dccfd-vqph7"] Jan 22 15:44:56 crc kubenswrapper[4825]: I0122 15:44:56.520630 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-657ccd9fc8-rsx5r"] Jan 22 15:44:56 crc kubenswrapper[4825]: I0122 15:44:56.529854 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-894b498b5-mnnlr"] Jan 22 15:44:56 crc kubenswrapper[4825]: I0122 15:44:56.840552 4825 generic.go:334] "Generic (PLEG): container finished" podID="10c3d4a5-599e-4f58-9062-9095ea1afd1a" containerID="04ac521c4eb65249f7edb27177d6bbb3fac4619fb171e7799b60c28c47a08915" exitCode=0 Jan 22 15:44:56 crc kubenswrapper[4825]: I0122 15:44:56.840925 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" event={"ID":"10c3d4a5-599e-4f58-9062-9095ea1afd1a","Type":"ContainerDied","Data":"04ac521c4eb65249f7edb27177d6bbb3fac4619fb171e7799b60c28c47a08915"} Jan 22 15:44:56 crc kubenswrapper[4825]: I0122 15:44:56.840960 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" event={"ID":"10c3d4a5-599e-4f58-9062-9095ea1afd1a","Type":"ContainerStarted","Data":"be584ece69a6b5d4cf6330893034da1b2bc89749630de7d2cf854ed8ec4f404a"} Jan 22 15:44:56 crc kubenswrapper[4825]: I0122 15:44:56.847302 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-657ccd9fc8-rsx5r" event={"ID":"6952fded-9cdf-4220-9f73-ff832415b100","Type":"ContainerStarted","Data":"aefca7933ee4150f9bb7c56f5bfc5cd7f1645913cfb1cb8245e4aac1123fd9f6"} Jan 22 15:44:56 crc kubenswrapper[4825]: I0122 15:44:56.867297 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-894b498b5-mnnlr" 
event={"ID":"349354a1-c3d7-4f6a-b85a-3a7b490b98da","Type":"ContainerStarted","Data":"4e711fcc6c1231a90c68327cc51de4f9d11de058b7f370c30a4a000aa7c6e410"} Jan 22 15:44:56 crc kubenswrapper[4825]: I0122 15:44:56.887418 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" event={"ID":"1ccd62bc-d183-4918-91d6-fd5be08f6dc1","Type":"ContainerStarted","Data":"ccfe1401afef409fdb6089f6000172a4d349e43b9ba631a36329f3839ccd94ee"} Jan 22 15:44:56 crc kubenswrapper[4825]: I0122 15:44:56.888274 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:44:57 crc kubenswrapper[4825]: I0122 15:44:57.076914 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-54vjv" Jan 22 15:44:57 crc kubenswrapper[4825]: I0122 15:44:57.181612 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dspld\" (UniqueName: \"kubernetes.io/projected/7211decb-e02d-47e6-9ea7-493e8e6a3743-kube-api-access-dspld\") pod \"7211decb-e02d-47e6-9ea7-493e8e6a3743\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " Jan 22 15:44:57 crc kubenswrapper[4825]: I0122 15:44:57.181681 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7211decb-e02d-47e6-9ea7-493e8e6a3743-etc-machine-id\") pod \"7211decb-e02d-47e6-9ea7-493e8e6a3743\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " Jan 22 15:44:57 crc kubenswrapper[4825]: I0122 15:44:57.181797 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-config-data\") pod \"7211decb-e02d-47e6-9ea7-493e8e6a3743\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " Jan 22 15:44:57 crc kubenswrapper[4825]: I0122 15:44:57.181928 4825 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7211decb-e02d-47e6-9ea7-493e8e6a3743-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7211decb-e02d-47e6-9ea7-493e8e6a3743" (UID: "7211decb-e02d-47e6-9ea7-493e8e6a3743"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:44:57 crc kubenswrapper[4825]: I0122 15:44:57.181970 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-combined-ca-bundle\") pod \"7211decb-e02d-47e6-9ea7-493e8e6a3743\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " Jan 22 15:44:57 crc kubenswrapper[4825]: I0122 15:44:57.182082 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-db-sync-config-data\") pod \"7211decb-e02d-47e6-9ea7-493e8e6a3743\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " Jan 22 15:44:57 crc kubenswrapper[4825]: I0122 15:44:57.182151 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-scripts\") pod \"7211decb-e02d-47e6-9ea7-493e8e6a3743\" (UID: \"7211decb-e02d-47e6-9ea7-493e8e6a3743\") " Jan 22 15:44:57 crc kubenswrapper[4825]: I0122 15:44:57.182729 4825 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7211decb-e02d-47e6-9ea7-493e8e6a3743-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:57 crc kubenswrapper[4825]: I0122 15:44:57.187544 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7211decb-e02d-47e6-9ea7-493e8e6a3743-kube-api-access-dspld" (OuterVolumeSpecName: "kube-api-access-dspld") pod 
"7211decb-e02d-47e6-9ea7-493e8e6a3743" (UID: "7211decb-e02d-47e6-9ea7-493e8e6a3743"). InnerVolumeSpecName "kube-api-access-dspld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:44:57 crc kubenswrapper[4825]: I0122 15:44:57.190245 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-scripts" (OuterVolumeSpecName: "scripts") pod "7211decb-e02d-47e6-9ea7-493e8e6a3743" (UID: "7211decb-e02d-47e6-9ea7-493e8e6a3743"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:57 crc kubenswrapper[4825]: I0122 15:44:57.192899 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7211decb-e02d-47e6-9ea7-493e8e6a3743" (UID: "7211decb-e02d-47e6-9ea7-493e8e6a3743"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:57 crc kubenswrapper[4825]: I0122 15:44:57.233528 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7211decb-e02d-47e6-9ea7-493e8e6a3743" (UID: "7211decb-e02d-47e6-9ea7-493e8e6a3743"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:57 crc kubenswrapper[4825]: I0122 15:44:57.255889 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-config-data" (OuterVolumeSpecName: "config-data") pod "7211decb-e02d-47e6-9ea7-493e8e6a3743" (UID: "7211decb-e02d-47e6-9ea7-493e8e6a3743"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:44:57 crc kubenswrapper[4825]: E0122 15:44:57.280425 4825 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10c3d4a5_599e_4f58_9062_9095ea1afd1a.slice/crio-04ac521c4eb65249f7edb27177d6bbb3fac4619fb171e7799b60c28c47a08915.scope\": RecentStats: unable to find data in memory cache]" Jan 22 15:44:57 crc kubenswrapper[4825]: I0122 15:44:57.285632 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:57 crc kubenswrapper[4825]: I0122 15:44:57.285672 4825 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:57 crc kubenswrapper[4825]: I0122 15:44:57.285688 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:57 crc kubenswrapper[4825]: I0122 15:44:57.285701 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dspld\" (UniqueName: \"kubernetes.io/projected/7211decb-e02d-47e6-9ea7-493e8e6a3743-kube-api-access-dspld\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:57 crc kubenswrapper[4825]: I0122 15:44:57.285714 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7211decb-e02d-47e6-9ea7-493e8e6a3743-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:44:58 crc kubenswrapper[4825]: I0122 15:44:58.082002 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-54vjv" 
event={"ID":"7211decb-e02d-47e6-9ea7-493e8e6a3743","Type":"ContainerDied","Data":"32a781f74fd2048e112452706ee998cf2c328818530bb5df2f8158dc3b2de71b"} Jan 22 15:44:58 crc kubenswrapper[4825]: I0122 15:44:58.082365 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32a781f74fd2048e112452706ee998cf2c328818530bb5df2f8158dc3b2de71b" Jan 22 15:44:58 crc kubenswrapper[4825]: I0122 15:44:58.082287 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-54vjv" Jan 22 15:44:58 crc kubenswrapper[4825]: I0122 15:44:58.202460 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" event={"ID":"10c3d4a5-599e-4f58-9062-9095ea1afd1a","Type":"ContainerStarted","Data":"1ff03f1fe6fa44470877c2137fceeb7b6addcc9ccd173669865f58308055469e"} Jan 22 15:44:58 crc kubenswrapper[4825]: I0122 15:44:58.202911 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:44:58 crc kubenswrapper[4825]: I0122 15:44:58.219323 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-657ccd9fc8-rsx5r" event={"ID":"6952fded-9cdf-4220-9f73-ff832415b100","Type":"ContainerStarted","Data":"b163a390fb9a5d881230c6615588e783f43c1aad4e1c098c6534681b5024fc4f"} Jan 22 15:44:58 crc kubenswrapper[4825]: I0122 15:44:58.219630 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-657ccd9fc8-rsx5r" event={"ID":"6952fded-9cdf-4220-9f73-ff832415b100","Type":"ContainerStarted","Data":"74a9c1526b354517aa97d6b5e15e8e0cbf6d23aa8dfbc2cebff9bc0d4114c925"} Jan 22 15:44:58 crc kubenswrapper[4825]: I0122 15:44:58.246625 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" podStartSLOduration=5.246596998 podStartE2EDuration="5.246596998s" podCreationTimestamp="2026-01-22 15:44:53 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:44:58.228766907 +0000 UTC m=+1244.990293817" watchObservedRunningTime="2026-01-22 15:44:58.246596998 +0000 UTC m=+1245.008123908" Jan 22 15:44:58 crc kubenswrapper[4825]: I0122 15:44:58.269201 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-657ccd9fc8-rsx5r" podStartSLOduration=5.269177826 podStartE2EDuration="5.269177826s" podCreationTimestamp="2026-01-22 15:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:44:58.262524356 +0000 UTC m=+1245.024051266" watchObservedRunningTime="2026-01-22 15:44:58.269177826 +0000 UTC m=+1245.030704736" Jan 22 15:44:58 crc kubenswrapper[4825]: I0122 15:44:58.961461 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 15:44:58 crc kubenswrapper[4825]: I0122 15:44:58.962015 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.427455 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.427534 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.542807 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.627150 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 
15:44:59.628709 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.628824 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.630174 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.630202 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.630214 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.630225 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.631670 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 15:44:59 crc kubenswrapper[4825]: E0122 15:44:59.632784 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7211decb-e02d-47e6-9ea7-493e8e6a3743" containerName="cinder-db-sync" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.632811 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="7211decb-e02d-47e6-9ea7-493e8e6a3743" containerName="cinder-db-sync" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.634288 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="7211decb-e02d-47e6-9ea7-493e8e6a3743" containerName="cinder-db-sync" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.635905 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.646418 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.646683 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hgzqk" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.646825 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.647006 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.672050 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.743487 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-llsm4"] Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.766022 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20602869-cdc8-49cb-82ae-36d1c720f637-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " pod="openstack/cinder-scheduler-0" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.766069 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gcwm\" (UniqueName: \"kubernetes.io/projected/20602869-cdc8-49cb-82ae-36d1c720f637-kube-api-access-4gcwm\") pod \"cinder-scheduler-0\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " pod="openstack/cinder-scheduler-0" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.766177 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-config-data\") pod \"cinder-scheduler-0\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " pod="openstack/cinder-scheduler-0" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.766250 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " pod="openstack/cinder-scheduler-0" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.766332 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " pod="openstack/cinder-scheduler-0" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.766379 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-scripts\") pod \"cinder-scheduler-0\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " pod="openstack/cinder-scheduler-0" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.782596 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-npwwm"] Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.784415 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-npwwm" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.793513 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-npwwm"] Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.838591 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.868465 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-dns-svc\") pod \"dnsmasq-dns-6578955fd5-npwwm\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " pod="openstack/dnsmasq-dns-6578955fd5-npwwm" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.868537 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " pod="openstack/cinder-scheduler-0" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.868606 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-npwwm\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " pod="openstack/dnsmasq-dns-6578955fd5-npwwm" Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.868634 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-npwwm\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " pod="openstack/dnsmasq-dns-6578955fd5-npwwm" Jan 
22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.868665 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tscdj\" (UniqueName: \"kubernetes.io/projected/0e71e054-4364-4dc1-9eee-8ff7f6cac148-kube-api-access-tscdj\") pod \"dnsmasq-dns-6578955fd5-npwwm\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " pod="openstack/dnsmasq-dns-6578955fd5-npwwm"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.868711 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " pod="openstack/cinder-scheduler-0"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.868752 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-npwwm\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " pod="openstack/dnsmasq-dns-6578955fd5-npwwm"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.868806 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-scripts\") pod \"cinder-scheduler-0\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " pod="openstack/cinder-scheduler-0"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.868876 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20602869-cdc8-49cb-82ae-36d1c720f637-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " pod="openstack/cinder-scheduler-0"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.868907 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gcwm\" (UniqueName: \"kubernetes.io/projected/20602869-cdc8-49cb-82ae-36d1c720f637-kube-api-access-4gcwm\") pod \"cinder-scheduler-0\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " pod="openstack/cinder-scheduler-0"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.869039 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-config\") pod \"dnsmasq-dns-6578955fd5-npwwm\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " pod="openstack/dnsmasq-dns-6578955fd5-npwwm"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.869083 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-config-data\") pod \"cinder-scheduler-0\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " pod="openstack/cinder-scheduler-0"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.869775 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20602869-cdc8-49cb-82ae-36d1c720f637-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " pod="openstack/cinder-scheduler-0"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.882674 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " pod="openstack/cinder-scheduler-0"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.888163 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-scripts\") pod \"cinder-scheduler-0\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " pod="openstack/cinder-scheduler-0"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.903321 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-config-data\") pod \"cinder-scheduler-0\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " pod="openstack/cinder-scheduler-0"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.905711 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " pod="openstack/cinder-scheduler-0"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.907519 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.911930 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.912464 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gcwm\" (UniqueName: \"kubernetes.io/projected/20602869-cdc8-49cb-82ae-36d1c720f637-kube-api-access-4gcwm\") pod \"cinder-scheduler-0\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " pod="openstack/cinder-scheduler-0"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.924431 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.930104 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.972909 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-config\") pod \"dnsmasq-dns-6578955fd5-npwwm\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " pod="openstack/dnsmasq-dns-6578955fd5-npwwm"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.972994 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-dns-svc\") pod \"dnsmasq-dns-6578955fd5-npwwm\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " pod="openstack/dnsmasq-dns-6578955fd5-npwwm"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.973084 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-npwwm\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " pod="openstack/dnsmasq-dns-6578955fd5-npwwm"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.973104 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-npwwm\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " pod="openstack/dnsmasq-dns-6578955fd5-npwwm"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.973129 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tscdj\" (UniqueName: \"kubernetes.io/projected/0e71e054-4364-4dc1-9eee-8ff7f6cac148-kube-api-access-tscdj\") pod \"dnsmasq-dns-6578955fd5-npwwm\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " pod="openstack/dnsmasq-dns-6578955fd5-npwwm"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.973170 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-npwwm\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " pod="openstack/dnsmasq-dns-6578955fd5-npwwm"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.978483 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-config\") pod \"dnsmasq-dns-6578955fd5-npwwm\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " pod="openstack/dnsmasq-dns-6578955fd5-npwwm"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.979129 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-dns-svc\") pod \"dnsmasq-dns-6578955fd5-npwwm\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " pod="openstack/dnsmasq-dns-6578955fd5-npwwm"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.980391 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-npwwm\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " pod="openstack/dnsmasq-dns-6578955fd5-npwwm"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.980874 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-npwwm\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " pod="openstack/dnsmasq-dns-6578955fd5-npwwm"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.984174 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-npwwm\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " pod="openstack/dnsmasq-dns-6578955fd5-npwwm"
Jan 22 15:44:59 crc kubenswrapper[4825]: I0122 15:44:59.995014 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.001602 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tscdj\" (UniqueName: \"kubernetes.io/projected/0e71e054-4364-4dc1-9eee-8ff7f6cac148-kube-api-access-tscdj\") pod \"dnsmasq-dns-6578955fd5-npwwm\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " pod="openstack/dnsmasq-dns-6578955fd5-npwwm"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.075011 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.075063 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89lq8\" (UniqueName: \"kubernetes.io/projected/ffacb6a6-bce4-41f5-b611-1b0e80970b36-kube-api-access-89lq8\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.075297 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffacb6a6-bce4-41f5-b611-1b0e80970b36-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.075314 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-config-data-custom\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.075400 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffacb6a6-bce4-41f5-b611-1b0e80970b36-logs\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.075460 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-scripts\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.075490 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-config-data\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.128934 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-npwwm"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.157041 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54b58d9b7-q6gvq"]
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.157545 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54b58d9b7-q6gvq" podUID="db48ac29-2967-41a0-9512-9317757070a9" containerName="neutron-api" containerID="cri-o://7a439a656d03eea0e003c2fffeeb76bcdcabdadc037f986b8c6752f41bae7a0a" gracePeriod=30
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.157841 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54b58d9b7-q6gvq" podUID="db48ac29-2967-41a0-9512-9317757070a9" containerName="neutron-httpd" containerID="cri-o://84422cb4707968ad06ffc4146cc9027256b643f7fd3fc6324392e02e1bcdf2e0" gracePeriod=30
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.178398 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.178506 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89lq8\" (UniqueName: \"kubernetes.io/projected/ffacb6a6-bce4-41f5-b611-1b0e80970b36-kube-api-access-89lq8\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.178577 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffacb6a6-bce4-41f5-b611-1b0e80970b36-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.178600 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-config-data-custom\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.178725 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffacb6a6-bce4-41f5-b611-1b0e80970b36-logs\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.178797 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-scripts\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.178842 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-config-data\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.180065 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4"]
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.180103 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffacb6a6-bce4-41f5-b611-1b0e80970b36-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.181729 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.184346 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffacb6a6-bce4-41f5-b611-1b0e80970b36-logs\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.191568 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.192447 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.213271 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-config-data-custom\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.215215 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-config-data\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.217659 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-scripts\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.217745 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4"]
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.217878 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.229840 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89lq8\" (UniqueName: \"kubernetes.io/projected/ffacb6a6-bce4-41f5-b611-1b0e80970b36-kube-api-access-89lq8\") pod \"cinder-api-0\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") " pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.230962 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5fdbbbd487-qbcwc"]
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.237601 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.253966 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fdbbbd487-qbcwc"]
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.282188 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75bdq\" (UniqueName: \"kubernetes.io/projected/82d01009-6c3f-4fc0-9fdb-834e14ad78a2-kube-api-access-75bdq\") pod \"collect-profiles-29484945-ptff4\" (UID: \"82d01009-6c3f-4fc0-9fdb-834e14ad78a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.282341 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82d01009-6c3f-4fc0-9fdb-834e14ad78a2-secret-volume\") pod \"collect-profiles-29484945-ptff4\" (UID: \"82d01009-6c3f-4fc0-9fdb-834e14ad78a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.282417 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82d01009-6c3f-4fc0-9fdb-834e14ad78a2-config-volume\") pod \"collect-profiles-29484945-ptff4\" (UID: \"82d01009-6c3f-4fc0-9fdb-834e14ad78a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.305494 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-54b58d9b7-q6gvq" podUID="db48ac29-2967-41a0-9512-9317757070a9" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.178:9696/\": read tcp 10.217.0.2:36390->10.217.0.178:9696: read: connection reset by peer"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.385868 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75bdq\" (UniqueName: \"kubernetes.io/projected/82d01009-6c3f-4fc0-9fdb-834e14ad78a2-kube-api-access-75bdq\") pod \"collect-profiles-29484945-ptff4\" (UID: \"82d01009-6c3f-4fc0-9fdb-834e14ad78a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.385932 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d0008df0-93d9-43ac-b31b-3eed1b711628-config\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.386013 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0008df0-93d9-43ac-b31b-3eed1b711628-public-tls-certs\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.386049 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d0008df0-93d9-43ac-b31b-3eed1b711628-httpd-config\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.386095 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzfkq\" (UniqueName: \"kubernetes.io/projected/d0008df0-93d9-43ac-b31b-3eed1b711628-kube-api-access-jzfkq\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.386124 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0008df0-93d9-43ac-b31b-3eed1b711628-ovndb-tls-certs\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.386162 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0008df0-93d9-43ac-b31b-3eed1b711628-internal-tls-certs\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.386194 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82d01009-6c3f-4fc0-9fdb-834e14ad78a2-secret-volume\") pod \"collect-profiles-29484945-ptff4\" (UID: \"82d01009-6c3f-4fc0-9fdb-834e14ad78a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.386229 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82d01009-6c3f-4fc0-9fdb-834e14ad78a2-config-volume\") pod \"collect-profiles-29484945-ptff4\" (UID: \"82d01009-6c3f-4fc0-9fdb-834e14ad78a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.386254 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0008df0-93d9-43ac-b31b-3eed1b711628-combined-ca-bundle\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.387720 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82d01009-6c3f-4fc0-9fdb-834e14ad78a2-config-volume\") pod \"collect-profiles-29484945-ptff4\" (UID: \"82d01009-6c3f-4fc0-9fdb-834e14ad78a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.390642 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.391215 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82d01009-6c3f-4fc0-9fdb-834e14ad78a2-secret-volume\") pod \"collect-profiles-29484945-ptff4\" (UID: \"82d01009-6c3f-4fc0-9fdb-834e14ad78a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4"
Jan 22 15:45:00 crc kubenswrapper[4825]: I0122 15:45:00.408073 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75bdq\" (UniqueName: \"kubernetes.io/projected/82d01009-6c3f-4fc0-9fdb-834e14ad78a2-kube-api-access-75bdq\") pod \"collect-profiles-29484945-ptff4\" (UID: \"82d01009-6c3f-4fc0-9fdb-834e14ad78a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4"
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.325234 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0008df0-93d9-43ac-b31b-3eed1b711628-internal-tls-certs\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.325317 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0008df0-93d9-43ac-b31b-3eed1b711628-combined-ca-bundle\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.325452 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d0008df0-93d9-43ac-b31b-3eed1b711628-config\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.325523 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0008df0-93d9-43ac-b31b-3eed1b711628-public-tls-certs\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.325552 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d0008df0-93d9-43ac-b31b-3eed1b711628-httpd-config\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.325588 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzfkq\" (UniqueName: \"kubernetes.io/projected/d0008df0-93d9-43ac-b31b-3eed1b711628-kube-api-access-jzfkq\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.325613 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0008df0-93d9-43ac-b31b-3eed1b711628-ovndb-tls-certs\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.326292 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4"
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.353076 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0008df0-93d9-43ac-b31b-3eed1b711628-combined-ca-bundle\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.353733 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d0008df0-93d9-43ac-b31b-3eed1b711628-httpd-config\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.354384 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0008df0-93d9-43ac-b31b-3eed1b711628-ovndb-tls-certs\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.358023 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d0008df0-93d9-43ac-b31b-3eed1b711628-config\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.358386 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0008df0-93d9-43ac-b31b-3eed1b711628-public-tls-certs\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.391139 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0008df0-93d9-43ac-b31b-3eed1b711628-internal-tls-certs\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.434782 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzfkq\" (UniqueName: \"kubernetes.io/projected/d0008df0-93d9-43ac-b31b-3eed1b711628-kube-api-access-jzfkq\") pod \"neutron-5fdbbbd487-qbcwc\" (UID: \"d0008df0-93d9-43ac-b31b-3eed1b711628\") " pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.444686 4825 generic.go:334] "Generic (PLEG): container finished" podID="db48ac29-2967-41a0-9512-9317757070a9" containerID="84422cb4707968ad06ffc4146cc9027256b643f7fd3fc6324392e02e1bcdf2e0" exitCode=0
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.445803 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" podUID="10c3d4a5-599e-4f58-9062-9095ea1afd1a" containerName="dnsmasq-dns" containerID="cri-o://1ff03f1fe6fa44470877c2137fceeb7b6addcc9ccd173669865f58308055469e" gracePeriod=10
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.446237 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54b58d9b7-q6gvq" event={"ID":"db48ac29-2967-41a0-9512-9317757070a9","Type":"ContainerDied","Data":"84422cb4707968ad06ffc4146cc9027256b643f7fd3fc6324392e02e1bcdf2e0"}
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.446294 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.446506 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 22 15:45:01 crc kubenswrapper[4825]: I0122 15:45:01.549890 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5fdbbbd487-qbcwc"
Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.037859 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-647867566-jbd62"]
Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.041346 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-647867566-jbd62"
Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.048260 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.048714 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.059667 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-647867566-jbd62"]
Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.226954 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cc36209-9086-4104-ac1c-0483ff8f05e6-public-tls-certs\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62"
Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.227105 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz6xd\" (UniqueName: \"kubernetes.io/projected/4cc36209-9086-4104-ac1c-0483ff8f05e6-kube-api-access-jz6xd\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62"
Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.227197 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cc36209-9086-4104-ac1c-0483ff8f05e6-logs\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62"
Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.227240 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cc36209-9086-4104-ac1c-0483ff8f05e6-config-data\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62"
Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.227263 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cc36209-9086-4104-ac1c-0483ff8f05e6-internal-tls-certs\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62"
Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.227306 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc36209-9086-4104-ac1c-0483ff8f05e6-combined-ca-bundle\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62"
Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.227353 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cc36209-9086-4104-ac1c-0483ff8f05e6-config-data-custom\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62"
Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.331037 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz6xd\" (UniqueName: \"kubernetes.io/projected/4cc36209-9086-4104-ac1c-0483ff8f05e6-kube-api-access-jz6xd\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62"
Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.331213 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cc36209-9086-4104-ac1c-0483ff8f05e6-logs\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62"
Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.331248 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cc36209-9086-4104-ac1c-0483ff8f05e6-config-data\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62"
Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.331270 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cc36209-9086-4104-ac1c-0483ff8f05e6-internal-tls-certs\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62"
Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.331318 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/4cc36209-9086-4104-ac1c-0483ff8f05e6-combined-ca-bundle\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62" Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.331397 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cc36209-9086-4104-ac1c-0483ff8f05e6-config-data-custom\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62" Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.331437 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cc36209-9086-4104-ac1c-0483ff8f05e6-public-tls-certs\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62" Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.336715 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cc36209-9086-4104-ac1c-0483ff8f05e6-public-tls-certs\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62" Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.337208 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cc36209-9086-4104-ac1c-0483ff8f05e6-logs\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62" Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.340337 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cc36209-9086-4104-ac1c-0483ff8f05e6-config-data-custom\") pod 
\"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62" Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.346890 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cc36209-9086-4104-ac1c-0483ff8f05e6-internal-tls-certs\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62" Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.353230 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc36209-9086-4104-ac1c-0483ff8f05e6-combined-ca-bundle\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62" Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.354622 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cc36209-9086-4104-ac1c-0483ff8f05e6-config-data\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62" Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.385739 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz6xd\" (UniqueName: \"kubernetes.io/projected/4cc36209-9086-4104-ac1c-0483ff8f05e6-kube-api-access-jz6xd\") pod \"barbican-api-647867566-jbd62\" (UID: \"4cc36209-9086-4104-ac1c-0483ff8f05e6\") " pod="openstack/barbican-api-647867566-jbd62" Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.462350 4825 generic.go:334] "Generic (PLEG): container finished" podID="10c3d4a5-599e-4f58-9062-9095ea1afd1a" containerID="1ff03f1fe6fa44470877c2137fceeb7b6addcc9ccd173669865f58308055469e" exitCode=0 Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.462413 4825 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" event={"ID":"10c3d4a5-599e-4f58-9062-9095ea1afd1a","Type":"ContainerDied","Data":"1ff03f1fe6fa44470877c2137fceeb7b6addcc9ccd173669865f58308055469e"} Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.505643 4825 generic.go:334] "Generic (PLEG): container finished" podID="db48ac29-2967-41a0-9512-9317757070a9" containerID="7a439a656d03eea0e003c2fffeeb76bcdcabdadc037f986b8c6752f41bae7a0a" exitCode=0 Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.506312 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54b58d9b7-q6gvq" event={"ID":"db48ac29-2967-41a0-9512-9317757070a9","Type":"ContainerDied","Data":"7a439a656d03eea0e003c2fffeeb76bcdcabdadc037f986b8c6752f41bae7a0a"} Jan 22 15:45:02 crc kubenswrapper[4825]: I0122 15:45:02.632626 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-647867566-jbd62" Jan 22 15:45:03 crc kubenswrapper[4825]: I0122 15:45:03.534926 4825 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 15:45:03 crc kubenswrapper[4825]: I0122 15:45:03.535422 4825 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.154309 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.179325 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.279906 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-ovsdbserver-nb\") pod \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.279964 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-internal-tls-certs\") pod \"db48ac29-2967-41a0-9512-9317757070a9\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.280057 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-httpd-config\") pod \"db48ac29-2967-41a0-9512-9317757070a9\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.280192 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-combined-ca-bundle\") pod \"db48ac29-2967-41a0-9512-9317757070a9\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.280302 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-public-tls-certs\") pod \"db48ac29-2967-41a0-9512-9317757070a9\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.280343 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-ovsdbserver-sb\") pod \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.280373 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9skcs\" (UniqueName: \"kubernetes.io/projected/10c3d4a5-599e-4f58-9062-9095ea1afd1a-kube-api-access-9skcs\") pod \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.280418 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bsnh\" (UniqueName: \"kubernetes.io/projected/db48ac29-2967-41a0-9512-9317757070a9-kube-api-access-7bsnh\") pod \"db48ac29-2967-41a0-9512-9317757070a9\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.280470 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-dns-svc\") pod \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.280499 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-ovndb-tls-certs\") pod \"db48ac29-2967-41a0-9512-9317757070a9\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.280543 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-config\") pod \"db48ac29-2967-41a0-9512-9317757070a9\" (UID: \"db48ac29-2967-41a0-9512-9317757070a9\") " Jan 22 15:45:04 crc 
kubenswrapper[4825]: I0122 15:45:04.280603 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-config\") pod \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.280660 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-dns-swift-storage-0\") pod \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\" (UID: \"10c3d4a5-599e-4f58-9062-9095ea1afd1a\") " Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.301436 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c3d4a5-599e-4f58-9062-9095ea1afd1a-kube-api-access-9skcs" (OuterVolumeSpecName: "kube-api-access-9skcs") pod "10c3d4a5-599e-4f58-9062-9095ea1afd1a" (UID: "10c3d4a5-599e-4f58-9062-9095ea1afd1a"). InnerVolumeSpecName "kube-api-access-9skcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.315354 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db48ac29-2967-41a0-9512-9317757070a9-kube-api-access-7bsnh" (OuterVolumeSpecName: "kube-api-access-7bsnh") pod "db48ac29-2967-41a0-9512-9317757070a9" (UID: "db48ac29-2967-41a0-9512-9317757070a9"). InnerVolumeSpecName "kube-api-access-7bsnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.321035 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "db48ac29-2967-41a0-9512-9317757070a9" (UID: "db48ac29-2967-41a0-9512-9317757070a9"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.401018 4825 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.401048 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9skcs\" (UniqueName: \"kubernetes.io/projected/10c3d4a5-599e-4f58-9062-9095ea1afd1a-kube-api-access-9skcs\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.401059 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bsnh\" (UniqueName: \"kubernetes.io/projected/db48ac29-2967-41a0-9512-9317757070a9-kube-api-access-7bsnh\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.567734 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "10c3d4a5-599e-4f58-9062-9095ea1afd1a" (UID: "10c3d4a5-599e-4f58-9062-9095ea1afd1a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.601266 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.607659 4825 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.609316 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54b58d9b7-q6gvq" event={"ID":"db48ac29-2967-41a0-9512-9317757070a9","Type":"ContainerDied","Data":"d156b65a89555f98bf8601f74b9f589e67a4c95a8ad5aa398fa60515ac17944e"} Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.609377 4825 scope.go:117] "RemoveContainer" containerID="84422cb4707968ad06ffc4146cc9027256b643f7fd3fc6324392e02e1bcdf2e0" Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.609523 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54b58d9b7-q6gvq" Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.614139 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" event={"ID":"10c3d4a5-599e-4f58-9062-9095ea1afd1a","Type":"ContainerDied","Data":"be584ece69a6b5d4cf6330893034da1b2bc89749630de7d2cf854ed8ec4f404a"} Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.614217 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-llsm4" Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.617368 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 22 15:45:04 crc kubenswrapper[4825]: I0122 15:45:04.966288 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fdbbbd487-qbcwc"] Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.123735 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db48ac29-2967-41a0-9512-9317757070a9" (UID: "db48ac29-2967-41a0-9512-9317757070a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.125014 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.140497 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "10c3d4a5-599e-4f58-9062-9095ea1afd1a" (UID: "10c3d4a5-599e-4f58-9062-9095ea1afd1a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.147923 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "db48ac29-2967-41a0-9512-9317757070a9" (UID: "db48ac29-2967-41a0-9512-9317757070a9"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.154397 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "10c3d4a5-599e-4f58-9062-9095ea1afd1a" (UID: "10c3d4a5-599e-4f58-9062-9095ea1afd1a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.157538 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-config" (OuterVolumeSpecName: "config") pod "10c3d4a5-599e-4f58-9062-9095ea1afd1a" (UID: "10c3d4a5-599e-4f58-9062-9095ea1afd1a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.229636 4825 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.229672 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.229685 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.229696 4825 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.269159 
4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4"] Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.271294 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.298057 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "10c3d4a5-599e-4f58-9062-9095ea1afd1a" (UID: "10c3d4a5-599e-4f58-9062-9095ea1afd1a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.298331 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-npwwm"] Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.312201 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "db48ac29-2967-41a0-9512-9317757070a9" (UID: "db48ac29-2967-41a0-9512-9317757070a9"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.318177 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-config" (OuterVolumeSpecName: "config") pod "db48ac29-2967-41a0-9512-9317757070a9" (UID: "db48ac29-2967-41a0-9512-9317757070a9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.332862 4825 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.332911 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.332924 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10c3d4a5-599e-4f58-9062-9095ea1afd1a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.353932 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-647867566-jbd62"] Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.395027 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "db48ac29-2967-41a0-9512-9317757070a9" (UID: "db48ac29-2967-41a0-9512-9317757070a9"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.435481 4825 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db48ac29-2967-41a0-9512-9317757070a9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.582087 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-llsm4"] Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.602078 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-llsm4"] Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.609353 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54b58d9b7-q6gvq"] Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.618580 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-54b58d9b7-q6gvq"] Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.644522 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.646304 4825 generic.go:334] "Generic (PLEG): container finished" podID="52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e" containerID="a6111f4cbe06aa79016ce81ccf5f78226b8a584b6987659384641d40833ca7a4" exitCode=0 Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.646389 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-cshtw" event={"ID":"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e","Type":"ContainerDied","Data":"a6111f4cbe06aa79016ce81ccf5f78226b8a584b6987659384641d40833ca7a4"} Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.653806 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" 
event={"ID":"ce3ac0ed-81fc-479f-b4b5-2448549178d2","Type":"ContainerStarted","Data":"53b3f499ed24bf51364d752406df5edab9a457a04f767fb65a743cff000ca988"} Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.666968 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-657ccd9fc8-rsx5r" podUID="6952fded-9cdf-4220-9f73-ff832415b100" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.667844 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" event={"ID":"1ccd62bc-d183-4918-91d6-fd5be08f6dc1","Type":"ContainerStarted","Data":"093e48d6b8930396e97f8b5662230a6c09d07ba0fb9b720d4630753c7d3fddcd"} Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.984321 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 15:45:05 crc kubenswrapper[4825]: I0122 15:45:05.984452 4825 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 15:45:06 crc kubenswrapper[4825]: I0122 15:45:06.008596 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 15:45:06 crc kubenswrapper[4825]: I0122 15:45:06.008712 4825 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 15:45:06 crc kubenswrapper[4825]: I0122 15:45:06.147094 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 15:45:06 crc kubenswrapper[4825]: I0122 15:45:06.215934 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 15:45:07 crc kubenswrapper[4825]: I0122 15:45:07.538208 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10c3d4a5-599e-4f58-9062-9095ea1afd1a" 
path="/var/lib/kubelet/pods/10c3d4a5-599e-4f58-9062-9095ea1afd1a/volumes" Jan 22 15:45:07 crc kubenswrapper[4825]: I0122 15:45:07.539374 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db48ac29-2967-41a0-9512-9317757070a9" path="/var/lib/kubelet/pods/db48ac29-2967-41a0-9512-9317757070a9/volumes" Jan 22 15:45:07 crc kubenswrapper[4825]: I0122 15:45:07.631508 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:45:11 crc kubenswrapper[4825]: I0122 15:45:11.538033 4825 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod0899ccaa-6936-4c34-92d3-e579cb0f0bea"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod0899ccaa-6936-4c34-92d3-e579cb0f0bea] : Timed out while waiting for systemd to remove kubepods-besteffort-pod0899ccaa_6936_4c34_92d3_e579cb0f0bea.slice" Jan 22 15:45:11 crc kubenswrapper[4825]: E0122 15:45:11.538593 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod0899ccaa-6936-4c34-92d3-e579cb0f0bea] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod0899ccaa-6936-4c34-92d3-e579cb0f0bea] : Timed out while waiting for systemd to remove kubepods-besteffort-pod0899ccaa_6936_4c34_92d3_e579cb0f0bea.slice" pod="openstack/dnsmasq-dns-cf78879c9-55djf" podUID="0899ccaa-6936-4c34-92d3-e579cb0f0bea" Jan 22 15:45:11 crc kubenswrapper[4825]: I0122 15:45:11.749547 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-55djf" Jan 22 15:45:11 crc kubenswrapper[4825]: I0122 15:45:11.806024 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-55djf"] Jan 22 15:45:11 crc kubenswrapper[4825]: I0122 15:45:11.817410 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-55djf"] Jan 22 15:45:12 crc kubenswrapper[4825]: W0122 15:45:12.163122 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82d01009_6c3f_4fc0_9fdb_834e14ad78a2.slice/crio-1ef573f8c2ccd1e40c3e27c3b76c9ef5112945c1f8d2e8a06e6d425a5eec2e9e WatchSource:0}: Error finding container 1ef573f8c2ccd1e40c3e27c3b76c9ef5112945c1f8d2e8a06e6d425a5eec2e9e: Status 404 returned error can't find the container with id 1ef573f8c2ccd1e40c3e27c3b76c9ef5112945c1f8d2e8a06e6d425a5eec2e9e Jan 22 15:45:12 crc kubenswrapper[4825]: W0122 15:45:12.172759 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffacb6a6_bce4_41f5_b611_1b0e80970b36.slice/crio-04d1eda05306cfd09d660e677a74cea2e72f22459ccb7de050b298448e575b10 WatchSource:0}: Error finding container 04d1eda05306cfd09d660e677a74cea2e72f22459ccb7de050b298448e575b10: Status 404 returned error can't find the container with id 04d1eda05306cfd09d660e677a74cea2e72f22459ccb7de050b298448e575b10 Jan 22 15:45:12 crc kubenswrapper[4825]: W0122 15:45:12.178437 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20602869_cdc8_49cb_82ae_36d1c720f637.slice/crio-4f3a93dce17bd1d4c4e28debffb82957d141af2a90e3ec649b3c0e8a15f2d038 WatchSource:0}: Error finding container 4f3a93dce17bd1d4c4e28debffb82957d141af2a90e3ec649b3c0e8a15f2d038: Status 404 returned error can't find the container with id 
4f3a93dce17bd1d4c4e28debffb82957d141af2a90e3ec649b3c0e8a15f2d038 Jan 22 15:45:12 crc kubenswrapper[4825]: W0122 15:45:12.179808 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e71e054_4364_4dc1_9eee_8ff7f6cac148.slice/crio-0b558b79cd79b64731f7dcfb7430f27e21ff83650058c2c69101204b3789380b WatchSource:0}: Error finding container 0b558b79cd79b64731f7dcfb7430f27e21ff83650058c2c69101204b3789380b: Status 404 returned error can't find the container with id 0b558b79cd79b64731f7dcfb7430f27e21ff83650058c2c69101204b3789380b Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.205193 4825 scope.go:117] "RemoveContainer" containerID="7a439a656d03eea0e003c2fffeeb76bcdcabdadc037f986b8c6752f41bae7a0a" Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.366730 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-cshtw" Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.505449 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-certs\") pod \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\" (UID: \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.505957 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mst8x\" (UniqueName: \"kubernetes.io/projected/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-kube-api-access-mst8x\") pod \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\" (UID: \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.506187 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-scripts\") pod \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\" (UID: 
\"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.506279 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-combined-ca-bundle\") pod \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\" (UID: \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.506319 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-config-data\") pod \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\" (UID: \"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e\") " Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.511172 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-kube-api-access-mst8x" (OuterVolumeSpecName: "kube-api-access-mst8x") pod "52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e" (UID: "52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e"). InnerVolumeSpecName "kube-api-access-mst8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.512060 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-scripts" (OuterVolumeSpecName: "scripts") pod "52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e" (UID: "52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.512698 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-certs" (OuterVolumeSpecName: "certs") pod "52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e" (UID: "52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.543876 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-config-data" (OuterVolumeSpecName: "config-data") pod "52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e" (UID: "52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.558036 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e" (UID: "52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.608633 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.608666 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.608680 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.608691 4825 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-certs\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:12 crc 
kubenswrapper[4825]: I0122 15:45:12.608701 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mst8x\" (UniqueName: \"kubernetes.io/projected/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e-kube-api-access-mst8x\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.759637 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20602869-cdc8-49cb-82ae-36d1c720f637","Type":"ContainerStarted","Data":"4f3a93dce17bd1d4c4e28debffb82957d141af2a90e3ec649b3c0e8a15f2d038"} Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.761005 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4" event={"ID":"82d01009-6c3f-4fc0-9fdb-834e14ad78a2","Type":"ContainerStarted","Data":"1ef573f8c2ccd1e40c3e27c3b76c9ef5112945c1f8d2e8a06e6d425a5eec2e9e"} Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.762019 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fdbbbd487-qbcwc" event={"ID":"d0008df0-93d9-43ac-b31b-3eed1b711628","Type":"ContainerStarted","Data":"42eaed294870d84cd2f90ff75d608088b2263409b7b885c5ae7a2485177b33ec"} Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.763972 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-894b498b5-mnnlr" event={"ID":"349354a1-c3d7-4f6a-b85a-3a7b490b98da","Type":"ContainerStarted","Data":"5078a7ae0c243899f12f914cb8670dc22511a8212d0531e3e95b5bf3c4727e6b"} Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.765417 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ffacb6a6-bce4-41f5-b611-1b0e80970b36","Type":"ContainerStarted","Data":"04d1eda05306cfd09d660e677a74cea2e72f22459ccb7de050b298448e575b10"} Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.766467 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-647867566-jbd62" event={"ID":"4cc36209-9086-4104-ac1c-0483ff8f05e6","Type":"ContainerStarted","Data":"f38e15f796b5fb09d6af046eba9a8c5fee195c3f63f109213b336057d40ca10f"} Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.769230 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-cshtw" event={"ID":"52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e","Type":"ContainerDied","Data":"5bf2e54c15e1004ba3028e4f9470c6b103592565f3d6c8e1ca192ef34da4ba83"} Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.769275 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bf2e54c15e1004ba3028e4f9470c6b103592565f3d6c8e1ca192ef34da4ba83" Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.769244 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-cshtw" Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.770304 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f54bfddd7-7dxc5" event={"ID":"731b7ae4-c576-4832-851c-0a832ad56e31","Type":"ContainerStarted","Data":"dfffdd66fdd1b6bffe47bbbec4847b2a4d30eca1e4ce5c62a6e5f321cda270f3"} Jan 22 15:45:12 crc kubenswrapper[4825]: I0122 15:45:12.771565 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-npwwm" event={"ID":"0e71e054-4364-4dc1-9eee-8ff7f6cac148","Type":"ContainerStarted","Data":"0b558b79cd79b64731f7dcfb7430f27e21ff83650058c2c69101204b3789380b"} Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.163358 4825 scope.go:117] "RemoveContainer" containerID="1ff03f1fe6fa44470877c2137fceeb7b6addcc9ccd173669865f58308055469e" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.446376 4825 scope.go:117] "RemoveContainer" containerID="04ac521c4eb65249f7edb27177d6bbb3fac4619fb171e7799b60c28c47a08915" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.487564 4825 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/cloudkitty-storageinit-98v87"] Jan 22 15:45:13 crc kubenswrapper[4825]: E0122 15:45:13.488626 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db48ac29-2967-41a0-9512-9317757070a9" containerName="neutron-httpd" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.488676 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="db48ac29-2967-41a0-9512-9317757070a9" containerName="neutron-httpd" Jan 22 15:45:13 crc kubenswrapper[4825]: E0122 15:45:13.488693 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e" containerName="cloudkitty-db-sync" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.488700 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e" containerName="cloudkitty-db-sync" Jan 22 15:45:13 crc kubenswrapper[4825]: E0122 15:45:13.488708 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c3d4a5-599e-4f58-9062-9095ea1afd1a" containerName="dnsmasq-dns" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.488713 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c3d4a5-599e-4f58-9062-9095ea1afd1a" containerName="dnsmasq-dns" Jan 22 15:45:13 crc kubenswrapper[4825]: E0122 15:45:13.488727 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db48ac29-2967-41a0-9512-9317757070a9" containerName="neutron-api" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.488732 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="db48ac29-2967-41a0-9512-9317757070a9" containerName="neutron-api" Jan 22 15:45:13 crc kubenswrapper[4825]: E0122 15:45:13.488754 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c3d4a5-599e-4f58-9062-9095ea1afd1a" containerName="init" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.488759 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c3d4a5-599e-4f58-9062-9095ea1afd1a" 
containerName="init" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.489748 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="db48ac29-2967-41a0-9512-9317757070a9" containerName="neutron-httpd" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.489770 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e" containerName="cloudkitty-db-sync" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.489781 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="db48ac29-2967-41a0-9512-9317757070a9" containerName="neutron-api" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.489794 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c3d4a5-599e-4f58-9062-9095ea1afd1a" containerName="dnsmasq-dns" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.490505 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-98v87" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.492830 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.493675 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.493894 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-nn5s9" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.495306 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.495461 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.545429 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/287a38fa-f643-4202-8c80-73080c77388c-config-data\") pod \"cloudkitty-storageinit-98v87\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") " pod="openstack/cloudkitty-storageinit-98v87" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.545547 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/287a38fa-f643-4202-8c80-73080c77388c-certs\") pod \"cloudkitty-storageinit-98v87\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") " pod="openstack/cloudkitty-storageinit-98v87" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.545589 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8x45\" (UniqueName: \"kubernetes.io/projected/287a38fa-f643-4202-8c80-73080c77388c-kube-api-access-v8x45\") pod \"cloudkitty-storageinit-98v87\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") " pod="openstack/cloudkitty-storageinit-98v87" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.545665 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/287a38fa-f643-4202-8c80-73080c77388c-combined-ca-bundle\") pod \"cloudkitty-storageinit-98v87\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") " pod="openstack/cloudkitty-storageinit-98v87" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.545758 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/287a38fa-f643-4202-8c80-73080c77388c-scripts\") pod \"cloudkitty-storageinit-98v87\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") " pod="openstack/cloudkitty-storageinit-98v87" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.596011 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="0899ccaa-6936-4c34-92d3-e579cb0f0bea" path="/var/lib/kubelet/pods/0899ccaa-6936-4c34-92d3-e579cb0f0bea/volumes" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.596867 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-98v87"] Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.649700 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/287a38fa-f643-4202-8c80-73080c77388c-scripts\") pod \"cloudkitty-storageinit-98v87\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") " pod="openstack/cloudkitty-storageinit-98v87" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.649873 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/287a38fa-f643-4202-8c80-73080c77388c-config-data\") pod \"cloudkitty-storageinit-98v87\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") " pod="openstack/cloudkitty-storageinit-98v87" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.649903 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/287a38fa-f643-4202-8c80-73080c77388c-certs\") pod \"cloudkitty-storageinit-98v87\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") " pod="openstack/cloudkitty-storageinit-98v87" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.649935 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8x45\" (UniqueName: \"kubernetes.io/projected/287a38fa-f643-4202-8c80-73080c77388c-kube-api-access-v8x45\") pod \"cloudkitty-storageinit-98v87\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") " pod="openstack/cloudkitty-storageinit-98v87" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.649973 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/287a38fa-f643-4202-8c80-73080c77388c-combined-ca-bundle\") pod \"cloudkitty-storageinit-98v87\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") " pod="openstack/cloudkitty-storageinit-98v87" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.655036 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.655373 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.658229 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/287a38fa-f643-4202-8c80-73080c77388c-combined-ca-bundle\") pod \"cloudkitty-storageinit-98v87\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") " pod="openstack/cloudkitty-storageinit-98v87" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.660158 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.669964 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/287a38fa-f643-4202-8c80-73080c77388c-certs\") pod \"cloudkitty-storageinit-98v87\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") " pod="openstack/cloudkitty-storageinit-98v87" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.670005 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8x45\" (UniqueName: \"kubernetes.io/projected/287a38fa-f643-4202-8c80-73080c77388c-kube-api-access-v8x45\") pod \"cloudkitty-storageinit-98v87\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") " pod="openstack/cloudkitty-storageinit-98v87" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.672348 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/287a38fa-f643-4202-8c80-73080c77388c-config-data\") pod \"cloudkitty-storageinit-98v87\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") " pod="openstack/cloudkitty-storageinit-98v87" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.683422 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/287a38fa-f643-4202-8c80-73080c77388c-scripts\") pod \"cloudkitty-storageinit-98v87\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") " pod="openstack/cloudkitty-storageinit-98v87" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.789259 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fdbbbd487-qbcwc" event={"ID":"d0008df0-93d9-43ac-b31b-3eed1b711628","Type":"ContainerStarted","Data":"ce3372e233df0ff3f83f9c9d9607c44a51e64ff977ed0fe2abfa8d4ecad1355d"} Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.800058 4825 generic.go:334] "Generic (PLEG): container finished" podID="0e71e054-4364-4dc1-9eee-8ff7f6cac148" containerID="13d927fe6ab522fa2af52dac81996b832c30af3f91e15fc62efc244d36a20eb2" exitCode=0 Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.800117 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-npwwm" event={"ID":"0e71e054-4364-4dc1-9eee-8ff7f6cac148","Type":"ContainerDied","Data":"13d927fe6ab522fa2af52dac81996b832c30af3f91e15fc62efc244d36a20eb2"} Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.806605 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-647867566-jbd62" event={"ID":"4cc36209-9086-4104-ac1c-0483ff8f05e6","Type":"ContainerStarted","Data":"f41c583ef02f7ce429961ca5f94c7bb6a68b76991bbffeafa81d49a883d1a77c"} Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.812472 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4" 
event={"ID":"82d01009-6c3f-4fc0-9fdb-834e14ad78a2","Type":"ContainerStarted","Data":"44e408cf7ec021903f1402bc8c31cfbd12032f24c6d91eab9e76f91a6305478a"} Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.868290 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4" podStartSLOduration=13.868266066 podStartE2EDuration="13.868266066s" podCreationTimestamp="2026-01-22 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:45:13.846328666 +0000 UTC m=+1260.607855586" watchObservedRunningTime="2026-01-22 15:45:13.868266066 +0000 UTC m=+1260.629792976" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.941306 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-nn5s9" Jan 22 15:45:13 crc kubenswrapper[4825]: I0122 15:45:13.949464 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-98v87" Jan 22 15:45:14 crc kubenswrapper[4825]: E0122 15:45:14.199268 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="e224d84c-5d76-451a-9c81-bdf42336c375" Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.688092 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-98v87"] Jan 22 15:45:14 crc kubenswrapper[4825]: W0122 15:45:14.709412 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod287a38fa_f643_4202_8c80_73080c77388c.slice/crio-b698e877bd7e982315cb816cc09cd4d60161d2d95f1b9a4e753135d58151b66a WatchSource:0}: Error finding container b698e877bd7e982315cb816cc09cd4d60161d2d95f1b9a4e753135d58151b66a: Status 404 returned error can't find the container with id b698e877bd7e982315cb816cc09cd4d60161d2d95f1b9a4e753135d58151b66a Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.715314 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.829264 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e224d84c-5d76-451a-9c81-bdf42336c375","Type":"ContainerStarted","Data":"8c86cfee093c0059549c68aabbd41c17f7222d140ef92e17d2495d5855c2245c"} Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.829424 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e224d84c-5d76-451a-9c81-bdf42336c375" containerName="ceilometer-notification-agent" containerID="cri-o://9282688fd7dddaf6955498af53a185ae2ce898ba0a13961f469ce5cdd5966769" gracePeriod=30 Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 
15:45:14.829781 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.830085 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e224d84c-5d76-451a-9c81-bdf42336c375" containerName="sg-core" containerID="cri-o://02cf9d55053bcb0b9d46e4b373432f76568863dc59f7ad047d7ebf7697fbb579" gracePeriod=30 Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.830136 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e224d84c-5d76-451a-9c81-bdf42336c375" containerName="proxy-httpd" containerID="cri-o://8c86cfee093c0059549c68aabbd41c17f7222d140ef92e17d2495d5855c2245c" gracePeriod=30 Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.833910 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-894b498b5-mnnlr" event={"ID":"349354a1-c3d7-4f6a-b85a-3a7b490b98da","Type":"ContainerStarted","Data":"ea18eee17170e0b116fc6a0b91c201ea482c54e9bad76cfaf3c20db6b4f5dd4f"} Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.838144 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ffacb6a6-bce4-41f5-b611-1b0e80970b36","Type":"ContainerStarted","Data":"7b491e19054d24a1cf37051183a8bcd6a79820ef6957ff4766704ca990bbf6dd"} Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.842506 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-npwwm" event={"ID":"0e71e054-4364-4dc1-9eee-8ff7f6cac148","Type":"ContainerStarted","Data":"78116bdacf77100afbcead00f940e4166d91e825c997bf04466f4e3a30c6913d"} Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.843492 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-npwwm" Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.847051 4825 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cloudkitty-storageinit-98v87" event={"ID":"287a38fa-f643-4202-8c80-73080c77388c","Type":"ContainerStarted","Data":"b698e877bd7e982315cb816cc09cd4d60161d2d95f1b9a4e753135d58151b66a"} Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.870285 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-647867566-jbd62" event={"ID":"4cc36209-9086-4104-ac1c-0483ff8f05e6","Type":"ContainerStarted","Data":"f5d3bf615ae14f196611b4b7506d309752a756475b1c8e572bc0eab774918640"} Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.871165 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-647867566-jbd62" Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.871202 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-647867566-jbd62" Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.895250 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" event={"ID":"ce3ac0ed-81fc-479f-b4b5-2448549178d2","Type":"ContainerStarted","Data":"3bf6addf318eb95513170034d30e4e63aae1a3cd41e3c9bccd7e69f832b5abbb"} Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.924991 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-894b498b5-mnnlr" podStartSLOduration=14.330640637 podStartE2EDuration="21.924949663s" podCreationTimestamp="2026-01-22 15:44:53 +0000 UTC" firstStartedPulling="2026-01-22 15:44:56.545423481 +0000 UTC m=+1243.306950391" lastFinishedPulling="2026-01-22 15:45:04.139732507 +0000 UTC m=+1250.901259417" observedRunningTime="2026-01-22 15:45:14.875504994 +0000 UTC m=+1261.637031904" watchObservedRunningTime="2026-01-22 15:45:14.924949663 +0000 UTC m=+1261.686476573" Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.939679 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-worker-f54bfddd7-7dxc5" event={"ID":"731b7ae4-c576-4832-851c-0a832ad56e31","Type":"ContainerStarted","Data":"d185e7cb9163fc26103d2486d19e88bb13660f022cc69a3ad462bee4df83d97c"} Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.963748 4825 generic.go:334] "Generic (PLEG): container finished" podID="82d01009-6c3f-4fc0-9fdb-834e14ad78a2" containerID="44e408cf7ec021903f1402bc8c31cfbd12032f24c6d91eab9e76f91a6305478a" exitCode=0 Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.963858 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4" event={"ID":"82d01009-6c3f-4fc0-9fdb-834e14ad78a2","Type":"ContainerDied","Data":"44e408cf7ec021903f1402bc8c31cfbd12032f24c6d91eab9e76f91a6305478a"} Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.977589 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fdbbbd487-qbcwc" event={"ID":"d0008df0-93d9-43ac-b31b-3eed1b711628","Type":"ContainerStarted","Data":"bf540fd6def4a7d4df95957ef703e81ecdddbefc169007907a53188aa3ec7a21"} Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.981495 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5fdbbbd487-qbcwc" Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.981821 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-npwwm" podStartSLOduration=15.981798674 podStartE2EDuration="15.981798674s" podCreationTimestamp="2026-01-22 15:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:45:14.910878659 +0000 UTC m=+1261.672405569" watchObservedRunningTime="2026-01-22 15:45:14.981798674 +0000 UTC m=+1261.743325574" Jan 22 15:45:14 crc kubenswrapper[4825]: I0122 15:45:14.997508 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" event={"ID":"1ccd62bc-d183-4918-91d6-fd5be08f6dc1","Type":"ContainerStarted","Data":"ca1ea9ab83a48f49dfb523bbf1d9be9713c880fc12d044603a68f2fce808767f"} Jan 22 15:45:15 crc kubenswrapper[4825]: I0122 15:45:15.149741 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-f54bfddd7-7dxc5"] Jan 22 15:45:15 crc kubenswrapper[4825]: I0122 15:45:15.158547 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" podStartSLOduration=13.74635713 podStartE2EDuration="22.158521056s" podCreationTimestamp="2026-01-22 15:44:53 +0000 UTC" firstStartedPulling="2026-01-22 15:44:55.499707321 +0000 UTC m=+1242.261234231" lastFinishedPulling="2026-01-22 15:45:03.911871247 +0000 UTC m=+1250.673398157" observedRunningTime="2026-01-22 15:45:14.93948419 +0000 UTC m=+1261.701011110" watchObservedRunningTime="2026-01-22 15:45:15.158521056 +0000 UTC m=+1261.920047966" Jan 22 15:45:15 crc kubenswrapper[4825]: I0122 15:45:15.219667 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-647867566-jbd62" podStartSLOduration=14.219642631 podStartE2EDuration="14.219642631s" podCreationTimestamp="2026-01-22 15:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:45:14.971769327 +0000 UTC m=+1261.733296257" watchObservedRunningTime="2026-01-22 15:45:15.219642631 +0000 UTC m=+1261.981169541" Jan 22 15:45:15 crc kubenswrapper[4825]: I0122 15:45:15.279565 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-f54bfddd7-7dxc5" podStartSLOduration=13.758283733 podStartE2EDuration="22.27953645s" podCreationTimestamp="2026-01-22 15:44:53 +0000 UTC" firstStartedPulling="2026-01-22 15:44:55.498048343 +0000 UTC m=+1242.259575253" 
lastFinishedPulling="2026-01-22 15:45:04.01930106 +0000 UTC m=+1250.780827970" observedRunningTime="2026-01-22 15:45:15.047912852 +0000 UTC m=+1261.809439772" watchObservedRunningTime="2026-01-22 15:45:15.27953645 +0000 UTC m=+1262.041063360" Jan 22 15:45:15 crc kubenswrapper[4825]: I0122 15:45:15.333511 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-76545dccfd-vqph7" podStartSLOduration=14.285971388 podStartE2EDuration="22.333481708s" podCreationTimestamp="2026-01-22 15:44:53 +0000 UTC" firstStartedPulling="2026-01-22 15:44:56.091018712 +0000 UTC m=+1242.852545622" lastFinishedPulling="2026-01-22 15:45:04.138529032 +0000 UTC m=+1250.900055942" observedRunningTime="2026-01-22 15:45:15.104499716 +0000 UTC m=+1261.866026626" watchObservedRunningTime="2026-01-22 15:45:15.333481708 +0000 UTC m=+1262.095008618" Jan 22 15:45:15 crc kubenswrapper[4825]: I0122 15:45:15.375826 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5fdbbbd487-qbcwc" podStartSLOduration=15.375807743 podStartE2EDuration="15.375807743s" podCreationTimestamp="2026-01-22 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:45:15.128472514 +0000 UTC m=+1261.889999424" watchObservedRunningTime="2026-01-22 15:45:15.375807743 +0000 UTC m=+1262.137334653" Jan 22 15:45:15 crc kubenswrapper[4825]: I0122 15:45:15.402064 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-57fcb6778b-725vd"] Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.017995 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20602869-cdc8-49cb-82ae-36d1c720f637","Type":"ContainerStarted","Data":"25bad4877bc71685f3451e6177320f68bced3d83a63707e09a9b6f959cd1a06b"} Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 
15:45:16.018242 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20602869-cdc8-49cb-82ae-36d1c720f637","Type":"ContainerStarted","Data":"754a8e91784326cb73134050a4853ab3fa9c517cc0b1683100ee1198428c939c"} Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.022466 4825 generic.go:334] "Generic (PLEG): container finished" podID="e224d84c-5d76-451a-9c81-bdf42336c375" containerID="8c86cfee093c0059549c68aabbd41c17f7222d140ef92e17d2495d5855c2245c" exitCode=0 Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.022498 4825 generic.go:334] "Generic (PLEG): container finished" podID="e224d84c-5d76-451a-9c81-bdf42336c375" containerID="02cf9d55053bcb0b9d46e4b373432f76568863dc59f7ad047d7ebf7697fbb579" exitCode=2 Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.022560 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e224d84c-5d76-451a-9c81-bdf42336c375","Type":"ContainerDied","Data":"8c86cfee093c0059549c68aabbd41c17f7222d140ef92e17d2495d5855c2245c"} Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.022586 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e224d84c-5d76-451a-9c81-bdf42336c375","Type":"ContainerDied","Data":"02cf9d55053bcb0b9d46e4b373432f76568863dc59f7ad047d7ebf7697fbb579"} Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.025364 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ffacb6a6-bce4-41f5-b611-1b0e80970b36","Type":"ContainerStarted","Data":"2acba36dacbe61b9fa53cf1073c48aac2ec5bb977130cd3e02173c02a493fd2d"} Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.025435 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.025445 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" 
podUID="ffacb6a6-bce4-41f5-b611-1b0e80970b36" containerName="cinder-api-log" containerID="cri-o://7b491e19054d24a1cf37051183a8bcd6a79820ef6957ff4766704ca990bbf6dd" gracePeriod=30 Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.025496 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ffacb6a6-bce4-41f5-b611-1b0e80970b36" containerName="cinder-api" containerID="cri-o://2acba36dacbe61b9fa53cf1073c48aac2ec5bb977130cd3e02173c02a493fd2d" gracePeriod=30 Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.029647 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-98v87" event={"ID":"287a38fa-f643-4202-8c80-73080c77388c","Type":"ContainerStarted","Data":"aea3f64dc09134f3d6a7d83c91edb8129195bd4464169db681f14320c5e9d088"} Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.045942 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=15.56355287 podStartE2EDuration="17.045924356s" podCreationTimestamp="2026-01-22 15:44:59 +0000 UTC" firstStartedPulling="2026-01-22 15:45:12.181775891 +0000 UTC m=+1258.943302821" lastFinishedPulling="2026-01-22 15:45:13.664147397 +0000 UTC m=+1260.425674307" observedRunningTime="2026-01-22 15:45:16.042862948 +0000 UTC m=+1262.804389858" watchObservedRunningTime="2026-01-22 15:45:16.045924356 +0000 UTC m=+1262.807451266" Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.076703 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-98v87" podStartSLOduration=3.076661218 podStartE2EDuration="3.076661218s" podCreationTimestamp="2026-01-22 15:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:45:16.060855815 +0000 UTC m=+1262.822382755" watchObservedRunningTime="2026-01-22 15:45:16.076661218 +0000 
UTC m=+1262.838188128" Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.085454 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=17.08542443 podStartE2EDuration="17.08542443s" podCreationTimestamp="2026-01-22 15:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:45:16.08089671 +0000 UTC m=+1262.842423620" watchObservedRunningTime="2026-01-22 15:45:16.08542443 +0000 UTC m=+1262.846951340" Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.491937 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4" Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.694121 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82d01009-6c3f-4fc0-9fdb-834e14ad78a2-config-volume\") pod \"82d01009-6c3f-4fc0-9fdb-834e14ad78a2\" (UID: \"82d01009-6c3f-4fc0-9fdb-834e14ad78a2\") " Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.694507 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75bdq\" (UniqueName: \"kubernetes.io/projected/82d01009-6c3f-4fc0-9fdb-834e14ad78a2-kube-api-access-75bdq\") pod \"82d01009-6c3f-4fc0-9fdb-834e14ad78a2\" (UID: \"82d01009-6c3f-4fc0-9fdb-834e14ad78a2\") " Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.694615 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82d01009-6c3f-4fc0-9fdb-834e14ad78a2-secret-volume\") pod \"82d01009-6c3f-4fc0-9fdb-834e14ad78a2\" (UID: \"82d01009-6c3f-4fc0-9fdb-834e14ad78a2\") " Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.694810 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/82d01009-6c3f-4fc0-9fdb-834e14ad78a2-config-volume" (OuterVolumeSpecName: "config-volume") pod "82d01009-6c3f-4fc0-9fdb-834e14ad78a2" (UID: "82d01009-6c3f-4fc0-9fdb-834e14ad78a2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.695565 4825 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82d01009-6c3f-4fc0-9fdb-834e14ad78a2-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.700542 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82d01009-6c3f-4fc0-9fdb-834e14ad78a2-kube-api-access-75bdq" (OuterVolumeSpecName: "kube-api-access-75bdq") pod "82d01009-6c3f-4fc0-9fdb-834e14ad78a2" (UID: "82d01009-6c3f-4fc0-9fdb-834e14ad78a2"). InnerVolumeSpecName "kube-api-access-75bdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.701128 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d01009-6c3f-4fc0-9fdb-834e14ad78a2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "82d01009-6c3f-4fc0-9fdb-834e14ad78a2" (UID: "82d01009-6c3f-4fc0-9fdb-834e14ad78a2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.796380 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75bdq\" (UniqueName: \"kubernetes.io/projected/82d01009-6c3f-4fc0-9fdb-834e14ad78a2-kube-api-access-75bdq\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:16 crc kubenswrapper[4825]: I0122 15:45:16.796420 4825 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82d01009-6c3f-4fc0-9fdb-834e14ad78a2-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:17 crc kubenswrapper[4825]: I0122 15:45:17.038844 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4" event={"ID":"82d01009-6c3f-4fc0-9fdb-834e14ad78a2","Type":"ContainerDied","Data":"1ef573f8c2ccd1e40c3e27c3b76c9ef5112945c1f8d2e8a06e6d425a5eec2e9e"} Jan 22 15:45:17 crc kubenswrapper[4825]: I0122 15:45:17.038884 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4" Jan 22 15:45:17 crc kubenswrapper[4825]: I0122 15:45:17.038899 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ef573f8c2ccd1e40c3e27c3b76c9ef5112945c1f8d2e8a06e6d425a5eec2e9e" Jan 22 15:45:17 crc kubenswrapper[4825]: I0122 15:45:17.043923 4825 generic.go:334] "Generic (PLEG): container finished" podID="ffacb6a6-bce4-41f5-b611-1b0e80970b36" containerID="7b491e19054d24a1cf37051183a8bcd6a79820ef6957ff4766704ca990bbf6dd" exitCode=143 Jan 22 15:45:17 crc kubenswrapper[4825]: I0122 15:45:17.044097 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ffacb6a6-bce4-41f5-b611-1b0e80970b36","Type":"ContainerDied","Data":"7b491e19054d24a1cf37051183a8bcd6a79820ef6957ff4766704ca990bbf6dd"} Jan 22 15:45:17 crc kubenswrapper[4825]: I0122 15:45:17.044283 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" podUID="ce3ac0ed-81fc-479f-b4b5-2448549178d2" containerName="barbican-keystone-listener-log" containerID="cri-o://53b3f499ed24bf51364d752406df5edab9a457a04f767fb65a743cff000ca988" gracePeriod=30 Jan 22 15:45:17 crc kubenswrapper[4825]: I0122 15:45:17.044459 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" podUID="ce3ac0ed-81fc-479f-b4b5-2448549178d2" containerName="barbican-keystone-listener" containerID="cri-o://3bf6addf318eb95513170034d30e4e63aae1a3cd41e3c9bccd7e69f832b5abbb" gracePeriod=30 Jan 22 15:45:17 crc kubenswrapper[4825]: I0122 15:45:17.045043 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-f54bfddd7-7dxc5" podUID="731b7ae4-c576-4832-851c-0a832ad56e31" containerName="barbican-worker-log" 
containerID="cri-o://dfffdd66fdd1b6bffe47bbbec4847b2a4d30eca1e4ce5c62a6e5f321cda270f3" gracePeriod=30 Jan 22 15:45:17 crc kubenswrapper[4825]: I0122 15:45:17.045473 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-f54bfddd7-7dxc5" podUID="731b7ae4-c576-4832-851c-0a832ad56e31" containerName="barbican-worker" containerID="cri-o://d185e7cb9163fc26103d2486d19e88bb13660f022cc69a3ad462bee4df83d97c" gracePeriod=30 Jan 22 15:45:17 crc kubenswrapper[4825]: I0122 15:45:17.985260 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:45:17 crc kubenswrapper[4825]: I0122 15:45:17.989426 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-f54bfddd7-7dxc5" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.055487 4825 generic.go:334] "Generic (PLEG): container finished" podID="ce3ac0ed-81fc-479f-b4b5-2448549178d2" containerID="3bf6addf318eb95513170034d30e4e63aae1a3cd41e3c9bccd7e69f832b5abbb" exitCode=0 Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.055513 4825 generic.go:334] "Generic (PLEG): container finished" podID="ce3ac0ed-81fc-479f-b4b5-2448549178d2" containerID="53b3f499ed24bf51364d752406df5edab9a457a04f767fb65a743cff000ca988" exitCode=143 Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.055525 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" event={"ID":"ce3ac0ed-81fc-479f-b4b5-2448549178d2","Type":"ContainerDied","Data":"3bf6addf318eb95513170034d30e4e63aae1a3cd41e3c9bccd7e69f832b5abbb"} Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.055642 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" 
event={"ID":"ce3ac0ed-81fc-479f-b4b5-2448549178d2","Type":"ContainerDied","Data":"53b3f499ed24bf51364d752406df5edab9a457a04f767fb65a743cff000ca988"} Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.055657 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" event={"ID":"ce3ac0ed-81fc-479f-b4b5-2448549178d2","Type":"ContainerDied","Data":"e853b43ab16952656184677c9e65e25af280869c2eb289d2d588ed77d2b6e03c"} Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.055669 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e853b43ab16952656184677c9e65e25af280869c2eb289d2d588ed77d2b6e03c" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.055712 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.057081 4825 generic.go:334] "Generic (PLEG): container finished" podID="731b7ae4-c576-4832-851c-0a832ad56e31" containerID="d185e7cb9163fc26103d2486d19e88bb13660f022cc69a3ad462bee4df83d97c" exitCode=0 Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.057096 4825 generic.go:334] "Generic (PLEG): container finished" podID="731b7ae4-c576-4832-851c-0a832ad56e31" containerID="dfffdd66fdd1b6bffe47bbbec4847b2a4d30eca1e4ce5c62a6e5f321cda270f3" exitCode=143 Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.057125 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f54bfddd7-7dxc5" event={"ID":"731b7ae4-c576-4832-851c-0a832ad56e31","Type":"ContainerDied","Data":"d185e7cb9163fc26103d2486d19e88bb13660f022cc69a3ad462bee4df83d97c"} Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.057145 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f54bfddd7-7dxc5" 
event={"ID":"731b7ae4-c576-4832-851c-0a832ad56e31","Type":"ContainerDied","Data":"dfffdd66fdd1b6bffe47bbbec4847b2a4d30eca1e4ce5c62a6e5f321cda270f3"} Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.057155 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f54bfddd7-7dxc5" event={"ID":"731b7ae4-c576-4832-851c-0a832ad56e31","Type":"ContainerDied","Data":"99227ab0f96fa866212b0c4e3f027a152bcd416c3a7e29af4afa7db1e80f479c"} Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.057170 4825 scope.go:117] "RemoveContainer" containerID="d185e7cb9163fc26103d2486d19e88bb13660f022cc69a3ad462bee4df83d97c" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.057210 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-f54bfddd7-7dxc5" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.060811 4825 generic.go:334] "Generic (PLEG): container finished" podID="e224d84c-5d76-451a-9c81-bdf42336c375" containerID="9282688fd7dddaf6955498af53a185ae2ce898ba0a13961f469ce5cdd5966769" exitCode=0 Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.060838 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e224d84c-5d76-451a-9c81-bdf42336c375","Type":"ContainerDied","Data":"9282688fd7dddaf6955498af53a185ae2ce898ba0a13961f469ce5cdd5966769"} Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.060858 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e224d84c-5d76-451a-9c81-bdf42336c375","Type":"ContainerDied","Data":"886c52e70108b27db693d2a51d60620639aef7096c69ad261f1eb4796bde0c0f"} Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.060906 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.092322 4825 scope.go:117] "RemoveContainer" containerID="dfffdd66fdd1b6bffe47bbbec4847b2a4d30eca1e4ce5c62a6e5f321cda270f3" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.118857 4825 scope.go:117] "RemoveContainer" containerID="d185e7cb9163fc26103d2486d19e88bb13660f022cc69a3ad462bee4df83d97c" Jan 22 15:45:18 crc kubenswrapper[4825]: E0122 15:45:18.119365 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d185e7cb9163fc26103d2486d19e88bb13660f022cc69a3ad462bee4df83d97c\": container with ID starting with d185e7cb9163fc26103d2486d19e88bb13660f022cc69a3ad462bee4df83d97c not found: ID does not exist" containerID="d185e7cb9163fc26103d2486d19e88bb13660f022cc69a3ad462bee4df83d97c" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.119395 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d185e7cb9163fc26103d2486d19e88bb13660f022cc69a3ad462bee4df83d97c"} err="failed to get container status \"d185e7cb9163fc26103d2486d19e88bb13660f022cc69a3ad462bee4df83d97c\": rpc error: code = NotFound desc = could not find container \"d185e7cb9163fc26103d2486d19e88bb13660f022cc69a3ad462bee4df83d97c\": container with ID starting with d185e7cb9163fc26103d2486d19e88bb13660f022cc69a3ad462bee4df83d97c not found: ID does not exist" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.119414 4825 scope.go:117] "RemoveContainer" containerID="dfffdd66fdd1b6bffe47bbbec4847b2a4d30eca1e4ce5c62a6e5f321cda270f3" Jan 22 15:45:18 crc kubenswrapper[4825]: E0122 15:45:18.119748 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfffdd66fdd1b6bffe47bbbec4847b2a4d30eca1e4ce5c62a6e5f321cda270f3\": container with ID starting with 
dfffdd66fdd1b6bffe47bbbec4847b2a4d30eca1e4ce5c62a6e5f321cda270f3 not found: ID does not exist" containerID="dfffdd66fdd1b6bffe47bbbec4847b2a4d30eca1e4ce5c62a6e5f321cda270f3" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.119764 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfffdd66fdd1b6bffe47bbbec4847b2a4d30eca1e4ce5c62a6e5f321cda270f3"} err="failed to get container status \"dfffdd66fdd1b6bffe47bbbec4847b2a4d30eca1e4ce5c62a6e5f321cda270f3\": rpc error: code = NotFound desc = could not find container \"dfffdd66fdd1b6bffe47bbbec4847b2a4d30eca1e4ce5c62a6e5f321cda270f3\": container with ID starting with dfffdd66fdd1b6bffe47bbbec4847b2a4d30eca1e4ce5c62a6e5f321cda270f3 not found: ID does not exist" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.119778 4825 scope.go:117] "RemoveContainer" containerID="d185e7cb9163fc26103d2486d19e88bb13660f022cc69a3ad462bee4df83d97c" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.119939 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d185e7cb9163fc26103d2486d19e88bb13660f022cc69a3ad462bee4df83d97c"} err="failed to get container status \"d185e7cb9163fc26103d2486d19e88bb13660f022cc69a3ad462bee4df83d97c\": rpc error: code = NotFound desc = could not find container \"d185e7cb9163fc26103d2486d19e88bb13660f022cc69a3ad462bee4df83d97c\": container with ID starting with d185e7cb9163fc26103d2486d19e88bb13660f022cc69a3ad462bee4df83d97c not found: ID does not exist" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.119954 4825 scope.go:117] "RemoveContainer" containerID="dfffdd66fdd1b6bffe47bbbec4847b2a4d30eca1e4ce5c62a6e5f321cda270f3" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.120228 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfffdd66fdd1b6bffe47bbbec4847b2a4d30eca1e4ce5c62a6e5f321cda270f3"} err="failed to get container status 
\"dfffdd66fdd1b6bffe47bbbec4847b2a4d30eca1e4ce5c62a6e5f321cda270f3\": rpc error: code = NotFound desc = could not find container \"dfffdd66fdd1b6bffe47bbbec4847b2a4d30eca1e4ce5c62a6e5f321cda270f3\": container with ID starting with dfffdd66fdd1b6bffe47bbbec4847b2a4d30eca1e4ce5c62a6e5f321cda270f3 not found: ID does not exist" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.120244 4825 scope.go:117] "RemoveContainer" containerID="8c86cfee093c0059549c68aabbd41c17f7222d140ef92e17d2495d5855c2245c" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.138261 4825 scope.go:117] "RemoveContainer" containerID="02cf9d55053bcb0b9d46e4b373432f76568863dc59f7ad047d7ebf7697fbb579" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.145712 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/731b7ae4-c576-4832-851c-0a832ad56e31-config-data\") pod \"731b7ae4-c576-4832-851c-0a832ad56e31\" (UID: \"731b7ae4-c576-4832-851c-0a832ad56e31\") " Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.145788 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dckz9\" (UniqueName: \"kubernetes.io/projected/e224d84c-5d76-451a-9c81-bdf42336c375-kube-api-access-dckz9\") pod \"e224d84c-5d76-451a-9c81-bdf42336c375\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.145864 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-config-data\") pod \"e224d84c-5d76-451a-9c81-bdf42336c375\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.145941 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/731b7ae4-c576-4832-851c-0a832ad56e31-combined-ca-bundle\") pod \"731b7ae4-c576-4832-851c-0a832ad56e31\" (UID: \"731b7ae4-c576-4832-851c-0a832ad56e31\") " Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.146007 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-scripts\") pod \"e224d84c-5d76-451a-9c81-bdf42336c375\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.146087 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-combined-ca-bundle\") pod \"e224d84c-5d76-451a-9c81-bdf42336c375\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.146137 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng625\" (UniqueName: \"kubernetes.io/projected/731b7ae4-c576-4832-851c-0a832ad56e31-kube-api-access-ng625\") pod \"731b7ae4-c576-4832-851c-0a832ad56e31\" (UID: \"731b7ae4-c576-4832-851c-0a832ad56e31\") " Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.146163 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-sg-core-conf-yaml\") pod \"e224d84c-5d76-451a-9c81-bdf42336c375\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.146203 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e224d84c-5d76-451a-9c81-bdf42336c375-run-httpd\") pod \"e224d84c-5d76-451a-9c81-bdf42336c375\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 
15:45:18.146253 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e224d84c-5d76-451a-9c81-bdf42336c375-log-httpd\") pod \"e224d84c-5d76-451a-9c81-bdf42336c375\" (UID: \"e224d84c-5d76-451a-9c81-bdf42336c375\") " Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.146299 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/731b7ae4-c576-4832-851c-0a832ad56e31-config-data-custom\") pod \"731b7ae4-c576-4832-851c-0a832ad56e31\" (UID: \"731b7ae4-c576-4832-851c-0a832ad56e31\") " Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.146352 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/731b7ae4-c576-4832-851c-0a832ad56e31-logs\") pod \"731b7ae4-c576-4832-851c-0a832ad56e31\" (UID: \"731b7ae4-c576-4832-851c-0a832ad56e31\") " Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.147300 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/731b7ae4-c576-4832-851c-0a832ad56e31-logs" (OuterVolumeSpecName: "logs") pod "731b7ae4-c576-4832-851c-0a832ad56e31" (UID: "731b7ae4-c576-4832-851c-0a832ad56e31"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.148081 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e224d84c-5d76-451a-9c81-bdf42336c375-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e224d84c-5d76-451a-9c81-bdf42336c375" (UID: "e224d84c-5d76-451a-9c81-bdf42336c375"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.148150 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e224d84c-5d76-451a-9c81-bdf42336c375-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e224d84c-5d76-451a-9c81-bdf42336c375" (UID: "e224d84c-5d76-451a-9c81-bdf42336c375"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.151865 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-scripts" (OuterVolumeSpecName: "scripts") pod "e224d84c-5d76-451a-9c81-bdf42336c375" (UID: "e224d84c-5d76-451a-9c81-bdf42336c375"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.151929 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/731b7ae4-c576-4832-851c-0a832ad56e31-kube-api-access-ng625" (OuterVolumeSpecName: "kube-api-access-ng625") pod "731b7ae4-c576-4832-851c-0a832ad56e31" (UID: "731b7ae4-c576-4832-851c-0a832ad56e31"). InnerVolumeSpecName "kube-api-access-ng625". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.152418 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e224d84c-5d76-451a-9c81-bdf42336c375-kube-api-access-dckz9" (OuterVolumeSpecName: "kube-api-access-dckz9") pod "e224d84c-5d76-451a-9c81-bdf42336c375" (UID: "e224d84c-5d76-451a-9c81-bdf42336c375"). InnerVolumeSpecName "kube-api-access-dckz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.152616 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/731b7ae4-c576-4832-851c-0a832ad56e31-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "731b7ae4-c576-4832-851c-0a832ad56e31" (UID: "731b7ae4-c576-4832-851c-0a832ad56e31"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.161299 4825 scope.go:117] "RemoveContainer" containerID="9282688fd7dddaf6955498af53a185ae2ce898ba0a13961f469ce5cdd5966769" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.177032 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/731b7ae4-c576-4832-851c-0a832ad56e31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "731b7ae4-c576-4832-851c-0a832ad56e31" (UID: "731b7ae4-c576-4832-851c-0a832ad56e31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.178904 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e224d84c-5d76-451a-9c81-bdf42336c375" (UID: "e224d84c-5d76-451a-9c81-bdf42336c375"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.187538 4825 scope.go:117] "RemoveContainer" containerID="8c86cfee093c0059549c68aabbd41c17f7222d140ef92e17d2495d5855c2245c" Jan 22 15:45:18 crc kubenswrapper[4825]: E0122 15:45:18.188145 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c86cfee093c0059549c68aabbd41c17f7222d140ef92e17d2495d5855c2245c\": container with ID starting with 8c86cfee093c0059549c68aabbd41c17f7222d140ef92e17d2495d5855c2245c not found: ID does not exist" containerID="8c86cfee093c0059549c68aabbd41c17f7222d140ef92e17d2495d5855c2245c" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.188266 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c86cfee093c0059549c68aabbd41c17f7222d140ef92e17d2495d5855c2245c"} err="failed to get container status \"8c86cfee093c0059549c68aabbd41c17f7222d140ef92e17d2495d5855c2245c\": rpc error: code = NotFound desc = could not find container \"8c86cfee093c0059549c68aabbd41c17f7222d140ef92e17d2495d5855c2245c\": container with ID starting with 8c86cfee093c0059549c68aabbd41c17f7222d140ef92e17d2495d5855c2245c not found: ID does not exist" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.188331 4825 scope.go:117] "RemoveContainer" containerID="02cf9d55053bcb0b9d46e4b373432f76568863dc59f7ad047d7ebf7697fbb579" Jan 22 15:45:18 crc kubenswrapper[4825]: E0122 15:45:18.188796 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02cf9d55053bcb0b9d46e4b373432f76568863dc59f7ad047d7ebf7697fbb579\": container with ID starting with 02cf9d55053bcb0b9d46e4b373432f76568863dc59f7ad047d7ebf7697fbb579 not found: ID does not exist" containerID="02cf9d55053bcb0b9d46e4b373432f76568863dc59f7ad047d7ebf7697fbb579" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.188866 
4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02cf9d55053bcb0b9d46e4b373432f76568863dc59f7ad047d7ebf7697fbb579"} err="failed to get container status \"02cf9d55053bcb0b9d46e4b373432f76568863dc59f7ad047d7ebf7697fbb579\": rpc error: code = NotFound desc = could not find container \"02cf9d55053bcb0b9d46e4b373432f76568863dc59f7ad047d7ebf7697fbb579\": container with ID starting with 02cf9d55053bcb0b9d46e4b373432f76568863dc59f7ad047d7ebf7697fbb579 not found: ID does not exist" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.188896 4825 scope.go:117] "RemoveContainer" containerID="9282688fd7dddaf6955498af53a185ae2ce898ba0a13961f469ce5cdd5966769" Jan 22 15:45:18 crc kubenswrapper[4825]: E0122 15:45:18.189270 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9282688fd7dddaf6955498af53a185ae2ce898ba0a13961f469ce5cdd5966769\": container with ID starting with 9282688fd7dddaf6955498af53a185ae2ce898ba0a13961f469ce5cdd5966769 not found: ID does not exist" containerID="9282688fd7dddaf6955498af53a185ae2ce898ba0a13961f469ce5cdd5966769" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.189307 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9282688fd7dddaf6955498af53a185ae2ce898ba0a13961f469ce5cdd5966769"} err="failed to get container status \"9282688fd7dddaf6955498af53a185ae2ce898ba0a13961f469ce5cdd5966769\": rpc error: code = NotFound desc = could not find container \"9282688fd7dddaf6955498af53a185ae2ce898ba0a13961f469ce5cdd5966769\": container with ID starting with 9282688fd7dddaf6955498af53a185ae2ce898ba0a13961f469ce5cdd5966769 not found: ID does not exist" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.204610 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/731b7ae4-c576-4832-851c-0a832ad56e31-config-data" 
(OuterVolumeSpecName: "config-data") pod "731b7ae4-c576-4832-851c-0a832ad56e31" (UID: "731b7ae4-c576-4832-851c-0a832ad56e31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.214698 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e224d84c-5d76-451a-9c81-bdf42336c375" (UID: "e224d84c-5d76-451a-9c81-bdf42336c375"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.229375 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-config-data" (OuterVolumeSpecName: "config-data") pod "e224d84c-5d76-451a-9c81-bdf42336c375" (UID: "e224d84c-5d76-451a-9c81-bdf42336c375"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.248329 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce3ac0ed-81fc-479f-b4b5-2448549178d2-config-data\") pod \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.248447 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce3ac0ed-81fc-479f-b4b5-2448549178d2-logs\") pod \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.248532 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce3ac0ed-81fc-479f-b4b5-2448549178d2-config-data-custom\") pod \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.248716 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lz46\" (UniqueName: \"kubernetes.io/projected/ce3ac0ed-81fc-479f-b4b5-2448549178d2-kube-api-access-5lz46\") pod \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.248745 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce3ac0ed-81fc-479f-b4b5-2448549178d2-combined-ca-bundle\") pod \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\" (UID: \"ce3ac0ed-81fc-479f-b4b5-2448549178d2\") " Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.249179 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ce3ac0ed-81fc-479f-b4b5-2448549178d2-logs" (OuterVolumeSpecName: "logs") pod "ce3ac0ed-81fc-479f-b4b5-2448549178d2" (UID: "ce3ac0ed-81fc-479f-b4b5-2448549178d2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.249228 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.249380 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731b7ae4-c576-4832-851c-0a832ad56e31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.249466 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.249525 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.249581 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng625\" (UniqueName: \"kubernetes.io/projected/731b7ae4-c576-4832-851c-0a832ad56e31-kube-api-access-ng625\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.249636 4825 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e224d84c-5d76-451a-9c81-bdf42336c375-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.249695 4825 reconciler_common.go:293] "Volume detached for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e224d84c-5d76-451a-9c81-bdf42336c375-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.249762 4825 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e224d84c-5d76-451a-9c81-bdf42336c375-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.249818 4825 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/731b7ae4-c576-4832-851c-0a832ad56e31-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.249876 4825 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/731b7ae4-c576-4832-851c-0a832ad56e31-logs\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.249936 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/731b7ae4-c576-4832-851c-0a832ad56e31-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.250011 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dckz9\" (UniqueName: \"kubernetes.io/projected/e224d84c-5d76-451a-9c81-bdf42336c375-kube-api-access-dckz9\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.260033 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce3ac0ed-81fc-479f-b4b5-2448549178d2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ce3ac0ed-81fc-479f-b4b5-2448549178d2" (UID: "ce3ac0ed-81fc-479f-b4b5-2448549178d2"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.260470 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce3ac0ed-81fc-479f-b4b5-2448549178d2-kube-api-access-5lz46" (OuterVolumeSpecName: "kube-api-access-5lz46") pod "ce3ac0ed-81fc-479f-b4b5-2448549178d2" (UID: "ce3ac0ed-81fc-479f-b4b5-2448549178d2"). InnerVolumeSpecName "kube-api-access-5lz46". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.291176 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce3ac0ed-81fc-479f-b4b5-2448549178d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce3ac0ed-81fc-479f-b4b5-2448549178d2" (UID: "ce3ac0ed-81fc-479f-b4b5-2448549178d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.307891 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce3ac0ed-81fc-479f-b4b5-2448549178d2-config-data" (OuterVolumeSpecName: "config-data") pod "ce3ac0ed-81fc-479f-b4b5-2448549178d2" (UID: "ce3ac0ed-81fc-479f-b4b5-2448549178d2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.352060 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce3ac0ed-81fc-479f-b4b5-2448549178d2-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.352095 4825 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce3ac0ed-81fc-479f-b4b5-2448549178d2-logs\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.352103 4825 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce3ac0ed-81fc-479f-b4b5-2448549178d2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.352113 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lz46\" (UniqueName: \"kubernetes.io/projected/ce3ac0ed-81fc-479f-b4b5-2448549178d2-kube-api-access-5lz46\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.352121 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce3ac0ed-81fc-479f-b4b5-2448549178d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.401857 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-f54bfddd7-7dxc5"] Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.421298 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-f54bfddd7-7dxc5"] Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.539132 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.585652 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ceilometer-0"] Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.620831 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:45:18 crc kubenswrapper[4825]: E0122 15:45:18.621381 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e224d84c-5d76-451a-9c81-bdf42336c375" containerName="ceilometer-notification-agent" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.621403 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="e224d84c-5d76-451a-9c81-bdf42336c375" containerName="ceilometer-notification-agent" Jan 22 15:45:18 crc kubenswrapper[4825]: E0122 15:45:18.621427 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3ac0ed-81fc-479f-b4b5-2448549178d2" containerName="barbican-keystone-listener" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.621434 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3ac0ed-81fc-479f-b4b5-2448549178d2" containerName="barbican-keystone-listener" Jan 22 15:45:18 crc kubenswrapper[4825]: E0122 15:45:18.621443 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731b7ae4-c576-4832-851c-0a832ad56e31" containerName="barbican-worker-log" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.621450 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="731b7ae4-c576-4832-851c-0a832ad56e31" containerName="barbican-worker-log" Jan 22 15:45:18 crc kubenswrapper[4825]: E0122 15:45:18.621467 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3ac0ed-81fc-479f-b4b5-2448549178d2" containerName="barbican-keystone-listener-log" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.621474 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3ac0ed-81fc-479f-b4b5-2448549178d2" containerName="barbican-keystone-listener-log" Jan 22 15:45:18 crc kubenswrapper[4825]: E0122 15:45:18.621489 4825 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e224d84c-5d76-451a-9c81-bdf42336c375" containerName="sg-core" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.621496 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="e224d84c-5d76-451a-9c81-bdf42336c375" containerName="sg-core" Jan 22 15:45:18 crc kubenswrapper[4825]: E0122 15:45:18.621511 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d01009-6c3f-4fc0-9fdb-834e14ad78a2" containerName="collect-profiles" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.621523 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d01009-6c3f-4fc0-9fdb-834e14ad78a2" containerName="collect-profiles" Jan 22 15:45:18 crc kubenswrapper[4825]: E0122 15:45:18.621542 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e224d84c-5d76-451a-9c81-bdf42336c375" containerName="proxy-httpd" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.621549 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="e224d84c-5d76-451a-9c81-bdf42336c375" containerName="proxy-httpd" Jan 22 15:45:18 crc kubenswrapper[4825]: E0122 15:45:18.621559 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731b7ae4-c576-4832-851c-0a832ad56e31" containerName="barbican-worker" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.621568 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="731b7ae4-c576-4832-851c-0a832ad56e31" containerName="barbican-worker" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.621780 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="e224d84c-5d76-451a-9c81-bdf42336c375" containerName="ceilometer-notification-agent" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.621797 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="731b7ae4-c576-4832-851c-0a832ad56e31" containerName="barbican-worker" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.621815 4825 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e224d84c-5d76-451a-9c81-bdf42336c375" containerName="sg-core" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.621826 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce3ac0ed-81fc-479f-b4b5-2448549178d2" containerName="barbican-keystone-listener" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.621837 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce3ac0ed-81fc-479f-b4b5-2448549178d2" containerName="barbican-keystone-listener-log" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.621844 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="e224d84c-5d76-451a-9c81-bdf42336c375" containerName="proxy-httpd" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.621860 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d01009-6c3f-4fc0-9fdb-834e14ad78a2" containerName="collect-profiles" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.621872 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="731b7ae4-c576-4832-851c-0a832ad56e31" containerName="barbican-worker-log" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.624881 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.627861 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.628162 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.640860 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.688656 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-config-data\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.688741 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f236d595-13a1-4b4d-a37c-9fe0644907c7-log-httpd\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.688884 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.688929 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwrfj\" (UniqueName: \"kubernetes.io/projected/f236d595-13a1-4b4d-a37c-9fe0644907c7-kube-api-access-cwrfj\") pod \"ceilometer-0\" (UID: 
\"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.688970 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.689044 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-scripts\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.689066 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f236d595-13a1-4b4d-a37c-9fe0644907c7-run-httpd\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.790049 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.790130 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwrfj\" (UniqueName: \"kubernetes.io/projected/f236d595-13a1-4b4d-a37c-9fe0644907c7-kube-api-access-cwrfj\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.790179 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.790224 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-scripts\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.790250 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f236d595-13a1-4b4d-a37c-9fe0644907c7-run-httpd\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.790327 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-config-data\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.790367 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f236d595-13a1-4b4d-a37c-9fe0644907c7-log-httpd\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.790845 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f236d595-13a1-4b4d-a37c-9fe0644907c7-log-httpd\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " 
pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.790882 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f236d595-13a1-4b4d-a37c-9fe0644907c7-run-httpd\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.795034 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.795230 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.797946 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-config-data\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.803421 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-scripts\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.809941 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwrfj\" (UniqueName: 
\"kubernetes.io/projected/f236d595-13a1-4b4d-a37c-9fe0644907c7-kube-api-access-cwrfj\") pod \"ceilometer-0\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") " pod="openstack/ceilometer-0" Jan 22 15:45:18 crc kubenswrapper[4825]: I0122 15:45:18.945136 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:45:19 crc kubenswrapper[4825]: I0122 15:45:19.076167 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57fcb6778b-725vd" Jan 22 15:45:19 crc kubenswrapper[4825]: I0122 15:45:19.195966 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-57fcb6778b-725vd"] Jan 22 15:45:19 crc kubenswrapper[4825]: I0122 15:45:19.208529 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-57fcb6778b-725vd"] Jan 22 15:45:19 crc kubenswrapper[4825]: I0122 15:45:19.496754 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:45:19 crc kubenswrapper[4825]: I0122 15:45:19.558895 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="731b7ae4-c576-4832-851c-0a832ad56e31" path="/var/lib/kubelet/pods/731b7ae4-c576-4832-851c-0a832ad56e31/volumes" Jan 22 15:45:19 crc kubenswrapper[4825]: I0122 15:45:19.559887 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce3ac0ed-81fc-479f-b4b5-2448549178d2" path="/var/lib/kubelet/pods/ce3ac0ed-81fc-479f-b4b5-2448549178d2/volumes" Jan 22 15:45:19 crc kubenswrapper[4825]: I0122 15:45:19.560699 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e224d84c-5d76-451a-9c81-bdf42336c375" path="/var/lib/kubelet/pods/e224d84c-5d76-451a-9c81-bdf42336c375/volumes" Jan 22 15:45:19 crc kubenswrapper[4825]: I0122 15:45:19.996182 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 22 
15:45:20 crc kubenswrapper[4825]: I0122 15:45:20.100515 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f236d595-13a1-4b4d-a37c-9fe0644907c7","Type":"ContainerStarted","Data":"11df356c0e717b03adb13c346a8ac0742556f755f574fb3a8f1e3b9d7bb60f40"} Jan 22 15:45:20 crc kubenswrapper[4825]: I0122 15:45:20.103375 4825 generic.go:334] "Generic (PLEG): container finished" podID="287a38fa-f643-4202-8c80-73080c77388c" containerID="aea3f64dc09134f3d6a7d83c91edb8129195bd4464169db681f14320c5e9d088" exitCode=0 Jan 22 15:45:20 crc kubenswrapper[4825]: I0122 15:45:20.103413 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-98v87" event={"ID":"287a38fa-f643-4202-8c80-73080c77388c","Type":"ContainerDied","Data":"aea3f64dc09134f3d6a7d83c91edb8129195bd4464169db681f14320c5e9d088"} Jan 22 15:45:20 crc kubenswrapper[4825]: I0122 15:45:20.131196 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-npwwm" Jan 22 15:45:20 crc kubenswrapper[4825]: I0122 15:45:20.234267 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-gm5qw"] Jan 22 15:45:20 crc kubenswrapper[4825]: I0122 15:45:20.234539 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" podUID="3440c5ee-21a0-480d-8960-0d60146517cb" containerName="dnsmasq-dns" containerID="cri-o://cdca01747763a82978a4dbc536dafac3fa8272a1028f966c730a29e691e51523" gracePeriod=10 Jan 22 15:45:20 crc kubenswrapper[4825]: I0122 15:45:20.311611 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 22 15:45:20 crc kubenswrapper[4825]: I0122 15:45:20.401173 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 15:45:20 crc kubenswrapper[4825]: I0122 15:45:20.915281 4825 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-gm5qw"
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.045743 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-ovsdbserver-nb\") pod \"3440c5ee-21a0-480d-8960-0d60146517cb\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") "
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.045858 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-dns-svc\") pod \"3440c5ee-21a0-480d-8960-0d60146517cb\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") "
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.045960 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-ovsdbserver-sb\") pod \"3440c5ee-21a0-480d-8960-0d60146517cb\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") "
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.046066 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-config\") pod \"3440c5ee-21a0-480d-8960-0d60146517cb\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") "
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.046149 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-dns-swift-storage-0\") pod \"3440c5ee-21a0-480d-8960-0d60146517cb\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") "
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.046181 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s9zr\" (UniqueName: \"kubernetes.io/projected/3440c5ee-21a0-480d-8960-0d60146517cb-kube-api-access-9s9zr\") pod \"3440c5ee-21a0-480d-8960-0d60146517cb\" (UID: \"3440c5ee-21a0-480d-8960-0d60146517cb\") "
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.054430 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3440c5ee-21a0-480d-8960-0d60146517cb-kube-api-access-9s9zr" (OuterVolumeSpecName: "kube-api-access-9s9zr") pod "3440c5ee-21a0-480d-8960-0d60146517cb" (UID: "3440c5ee-21a0-480d-8960-0d60146517cb"). InnerVolumeSpecName "kube-api-access-9s9zr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.120206 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3440c5ee-21a0-480d-8960-0d60146517cb" (UID: "3440c5ee-21a0-480d-8960-0d60146517cb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.137788 4825 generic.go:334] "Generic (PLEG): container finished" podID="3440c5ee-21a0-480d-8960-0d60146517cb" containerID="cdca01747763a82978a4dbc536dafac3fa8272a1028f966c730a29e691e51523" exitCode=0
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.137848 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" event={"ID":"3440c5ee-21a0-480d-8960-0d60146517cb","Type":"ContainerDied","Data":"cdca01747763a82978a4dbc536dafac3fa8272a1028f966c730a29e691e51523"}
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.137876 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-gm5qw" event={"ID":"3440c5ee-21a0-480d-8960-0d60146517cb","Type":"ContainerDied","Data":"c3030df792bd39e249a7a1c96f9e0e33b969f76523637226abfe40bdafc3765d"}
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.137893 4825 scope.go:117] "RemoveContainer" containerID="cdca01747763a82978a4dbc536dafac3fa8272a1028f966c730a29e691e51523"
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.142068 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-gm5qw"
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.146468 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3440c5ee-21a0-480d-8960-0d60146517cb" (UID: "3440c5ee-21a0-480d-8960-0d60146517cb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.148863 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s9zr\" (UniqueName: \"kubernetes.io/projected/3440c5ee-21a0-480d-8960-0d60146517cb-kube-api-access-9s9zr\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.148893 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.148903 4825 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.151498 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3440c5ee-21a0-480d-8960-0d60146517cb" (UID: "3440c5ee-21a0-480d-8960-0d60146517cb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.155637 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f236d595-13a1-4b4d-a37c-9fe0644907c7","Type":"ContainerStarted","Data":"0ee22263dd2547b6f844ef11ddafe4c6e0d5d6073c7f67bb3962ab75c0091d71"}
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.155787 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="20602869-cdc8-49cb-82ae-36d1c720f637" containerName="cinder-scheduler" containerID="cri-o://754a8e91784326cb73134050a4853ab3fa9c517cc0b1683100ee1198428c939c" gracePeriod=30
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.156164 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="20602869-cdc8-49cb-82ae-36d1c720f637" containerName="probe" containerID="cri-o://25bad4877bc71685f3451e6177320f68bced3d83a63707e09a9b6f959cd1a06b" gracePeriod=30
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.213606 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3440c5ee-21a0-480d-8960-0d60146517cb" (UID: "3440c5ee-21a0-480d-8960-0d60146517cb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.252110 4825 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.252175 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.254467 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-config" (OuterVolumeSpecName: "config") pod "3440c5ee-21a0-480d-8960-0d60146517cb" (UID: "3440c5ee-21a0-480d-8960-0d60146517cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.274010 4825 scope.go:117] "RemoveContainer" containerID="89fd1799fd20aa6e1d9fce05981c8848657cb4b9ab66e65284e507d9b019ab82"
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.312370 4825 scope.go:117] "RemoveContainer" containerID="cdca01747763a82978a4dbc536dafac3fa8272a1028f966c730a29e691e51523"
Jan 22 15:45:21 crc kubenswrapper[4825]: E0122 15:45:21.316071 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdca01747763a82978a4dbc536dafac3fa8272a1028f966c730a29e691e51523\": container with ID starting with cdca01747763a82978a4dbc536dafac3fa8272a1028f966c730a29e691e51523 not found: ID does not exist" containerID="cdca01747763a82978a4dbc536dafac3fa8272a1028f966c730a29e691e51523"
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.316106 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdca01747763a82978a4dbc536dafac3fa8272a1028f966c730a29e691e51523"} err="failed to get container status \"cdca01747763a82978a4dbc536dafac3fa8272a1028f966c730a29e691e51523\": rpc error: code = NotFound desc = could not find container \"cdca01747763a82978a4dbc536dafac3fa8272a1028f966c730a29e691e51523\": container with ID starting with cdca01747763a82978a4dbc536dafac3fa8272a1028f966c730a29e691e51523 not found: ID does not exist"
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.316128 4825 scope.go:117] "RemoveContainer" containerID="89fd1799fd20aa6e1d9fce05981c8848657cb4b9ab66e65284e507d9b019ab82"
Jan 22 15:45:21 crc kubenswrapper[4825]: E0122 15:45:21.316703 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89fd1799fd20aa6e1d9fce05981c8848657cb4b9ab66e65284e507d9b019ab82\": container with ID starting with 89fd1799fd20aa6e1d9fce05981c8848657cb4b9ab66e65284e507d9b019ab82 not found: ID does not exist" containerID="89fd1799fd20aa6e1d9fce05981c8848657cb4b9ab66e65284e507d9b019ab82"
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.316751 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89fd1799fd20aa6e1d9fce05981c8848657cb4b9ab66e65284e507d9b019ab82"} err="failed to get container status \"89fd1799fd20aa6e1d9fce05981c8848657cb4b9ab66e65284e507d9b019ab82\": rpc error: code = NotFound desc = could not find container \"89fd1799fd20aa6e1d9fce05981c8848657cb4b9ab66e65284e507d9b019ab82\": container with ID starting with 89fd1799fd20aa6e1d9fce05981c8848657cb4b9ab66e65284e507d9b019ab82 not found: ID does not exist"
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.353625 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3440c5ee-21a0-480d-8960-0d60146517cb-config\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.562528 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-gm5qw"]
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.583375 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-gm5qw"]
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.604646 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-98v87"
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.796404 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/287a38fa-f643-4202-8c80-73080c77388c-combined-ca-bundle\") pod \"287a38fa-f643-4202-8c80-73080c77388c\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") "
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.796823 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8x45\" (UniqueName: \"kubernetes.io/projected/287a38fa-f643-4202-8c80-73080c77388c-kube-api-access-v8x45\") pod \"287a38fa-f643-4202-8c80-73080c77388c\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") "
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.797043 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/287a38fa-f643-4202-8c80-73080c77388c-config-data\") pod \"287a38fa-f643-4202-8c80-73080c77388c\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") "
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.797092 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/287a38fa-f643-4202-8c80-73080c77388c-certs\") pod \"287a38fa-f643-4202-8c80-73080c77388c\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") "
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.797129 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/287a38fa-f643-4202-8c80-73080c77388c-scripts\") pod \"287a38fa-f643-4202-8c80-73080c77388c\" (UID: \"287a38fa-f643-4202-8c80-73080c77388c\") "
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.806417 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/287a38fa-f643-4202-8c80-73080c77388c-kube-api-access-v8x45" (OuterVolumeSpecName: "kube-api-access-v8x45") pod "287a38fa-f643-4202-8c80-73080c77388c" (UID: "287a38fa-f643-4202-8c80-73080c77388c"). InnerVolumeSpecName "kube-api-access-v8x45". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.808675 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/287a38fa-f643-4202-8c80-73080c77388c-certs" (OuterVolumeSpecName: "certs") pod "287a38fa-f643-4202-8c80-73080c77388c" (UID: "287a38fa-f643-4202-8c80-73080c77388c"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.816181 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/287a38fa-f643-4202-8c80-73080c77388c-scripts" (OuterVolumeSpecName: "scripts") pod "287a38fa-f643-4202-8c80-73080c77388c" (UID: "287a38fa-f643-4202-8c80-73080c77388c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.850214 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/287a38fa-f643-4202-8c80-73080c77388c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "287a38fa-f643-4202-8c80-73080c77388c" (UID: "287a38fa-f643-4202-8c80-73080c77388c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.865099 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/287a38fa-f643-4202-8c80-73080c77388c-config-data" (OuterVolumeSpecName: "config-data") pod "287a38fa-f643-4202-8c80-73080c77388c" (UID: "287a38fa-f643-4202-8c80-73080c77388c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.907658 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/287a38fa-f643-4202-8c80-73080c77388c-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.907715 4825 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/287a38fa-f643-4202-8c80-73080c77388c-certs\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.907724 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/287a38fa-f643-4202-8c80-73080c77388c-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.907733 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/287a38fa-f643-4202-8c80-73080c77388c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:21 crc kubenswrapper[4825]: I0122 15:45:21.907743 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8x45\" (UniqueName: \"kubernetes.io/projected/287a38fa-f643-4202-8c80-73080c77388c-kube-api-access-v8x45\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.185048 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-98v87" event={"ID":"287a38fa-f643-4202-8c80-73080c77388c","Type":"ContainerDied","Data":"b698e877bd7e982315cb816cc09cd4d60161d2d95f1b9a4e753135d58151b66a"}
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.185108 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b698e877bd7e982315cb816cc09cd4d60161d2d95f1b9a4e753135d58151b66a"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.185192 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-98v87"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.473390 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"]
Jan 22 15:45:22 crc kubenswrapper[4825]: E0122 15:45:22.474057 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="287a38fa-f643-4202-8c80-73080c77388c" containerName="cloudkitty-storageinit"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.474071 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="287a38fa-f643-4202-8c80-73080c77388c" containerName="cloudkitty-storageinit"
Jan 22 15:45:22 crc kubenswrapper[4825]: E0122 15:45:22.474098 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3440c5ee-21a0-480d-8960-0d60146517cb" containerName="init"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.474104 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="3440c5ee-21a0-480d-8960-0d60146517cb" containerName="init"
Jan 22 15:45:22 crc kubenswrapper[4825]: E0122 15:45:22.474121 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3440c5ee-21a0-480d-8960-0d60146517cb" containerName="dnsmasq-dns"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.474128 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="3440c5ee-21a0-480d-8960-0d60146517cb" containerName="dnsmasq-dns"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.474303 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="3440c5ee-21a0-480d-8960-0d60146517cb" containerName="dnsmasq-dns"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.474324 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="287a38fa-f643-4202-8c80-73080c77388c" containerName="cloudkitty-storageinit"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.475084 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.483549 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.511377 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.511420 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.511376 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.523445 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-nn5s9"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.525065 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-config-data\") pod \"cloudkitty-proc-0\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.525146 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntgc7\" (UniqueName: \"kubernetes.io/projected/21dfb856-a389-40e5-b6f4-d25ef0029531-kube-api-access-ntgc7\") pod \"cloudkitty-proc-0\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.525181 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-scripts\") pod \"cloudkitty-proc-0\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.525301 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/21dfb856-a389-40e5-b6f4-d25ef0029531-certs\") pod \"cloudkitty-proc-0\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.525322 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.525383 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.561068 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"]
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.635823 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-hkzt2"]
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.640901 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-config-data\") pod \"cloudkitty-proc-0\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.641097 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntgc7\" (UniqueName: \"kubernetes.io/projected/21dfb856-a389-40e5-b6f4-d25ef0029531-kube-api-access-ntgc7\") pod \"cloudkitty-proc-0\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.641139 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-scripts\") pod \"cloudkitty-proc-0\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.641301 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.641326 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/21dfb856-a389-40e5-b6f4-d25ef0029531-certs\") pod \"cloudkitty-proc-0\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.641442 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.657141 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-hkzt2"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.658301 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.659343 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-config-data\") pod \"cloudkitty-proc-0\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.663683 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.666415 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/21dfb856-a389-40e5-b6f4-d25ef0029531-certs\") pod \"cloudkitty-proc-0\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.671507 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntgc7\" (UniqueName: \"kubernetes.io/projected/21dfb856-a389-40e5-b6f4-d25ef0029531-kube-api-access-ntgc7\") pod \"cloudkitty-proc-0\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.672650 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-scripts\") pod \"cloudkitty-proc-0\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.676092 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-hkzt2"]
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.755100 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-dns-svc\") pod \"dnsmasq-dns-58bd69657f-hkzt2\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " pod="openstack/dnsmasq-dns-58bd69657f-hkzt2"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.755141 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-hkzt2\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " pod="openstack/dnsmasq-dns-58bd69657f-hkzt2"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.755197 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-config\") pod \"dnsmasq-dns-58bd69657f-hkzt2\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " pod="openstack/dnsmasq-dns-58bd69657f-hkzt2"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.755247 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfpn5\" (UniqueName: \"kubernetes.io/projected/91f09962-57c2-42b0-9077-05b26c5899b3-kube-api-access-kfpn5\") pod \"dnsmasq-dns-58bd69657f-hkzt2\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " pod="openstack/dnsmasq-dns-58bd69657f-hkzt2"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.755335 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-hkzt2\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " pod="openstack/dnsmasq-dns-58bd69657f-hkzt2"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.755397 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-hkzt2\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " pod="openstack/dnsmasq-dns-58bd69657f-hkzt2"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.828759 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.862253 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-dns-svc\") pod \"dnsmasq-dns-58bd69657f-hkzt2\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " pod="openstack/dnsmasq-dns-58bd69657f-hkzt2"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.862313 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-hkzt2\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " pod="openstack/dnsmasq-dns-58bd69657f-hkzt2"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.862636 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-config\") pod \"dnsmasq-dns-58bd69657f-hkzt2\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " pod="openstack/dnsmasq-dns-58bd69657f-hkzt2"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.863281 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfpn5\" (UniqueName: \"kubernetes.io/projected/91f09962-57c2-42b0-9077-05b26c5899b3-kube-api-access-kfpn5\") pod \"dnsmasq-dns-58bd69657f-hkzt2\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " pod="openstack/dnsmasq-dns-58bd69657f-hkzt2"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.863688 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-config\") pod \"dnsmasq-dns-58bd69657f-hkzt2\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " pod="openstack/dnsmasq-dns-58bd69657f-hkzt2"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.865619 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-hkzt2\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " pod="openstack/dnsmasq-dns-58bd69657f-hkzt2"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.865797 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-hkzt2\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " pod="openstack/dnsmasq-dns-58bd69657f-hkzt2"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.867258 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-hkzt2\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " pod="openstack/dnsmasq-dns-58bd69657f-hkzt2"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.867612 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-dns-svc\") pod \"dnsmasq-dns-58bd69657f-hkzt2\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " pod="openstack/dnsmasq-dns-58bd69657f-hkzt2"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.906857 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-hkzt2\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " pod="openstack/dnsmasq-dns-58bd69657f-hkzt2"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.907732 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfpn5\" (UniqueName: \"kubernetes.io/projected/91f09962-57c2-42b0-9077-05b26c5899b3-kube-api-access-kfpn5\") pod \"dnsmasq-dns-58bd69657f-hkzt2\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " pod="openstack/dnsmasq-dns-58bd69657f-hkzt2"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.908092 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-hkzt2\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " pod="openstack/dnsmasq-dns-58bd69657f-hkzt2"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.912063 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"]
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.939589 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"]
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.939737 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.952242 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.975193 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-config-data\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.983622 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.984527 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-scripts\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.984732 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zghxg\" (UniqueName: \"kubernetes.io/projected/c331c96c-8079-4df4-bb2d-5cc54412ee99-kube-api-access-zghxg\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.985078 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.985308 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c331c96c-8079-4df4-bb2d-5cc54412ee99-logs\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:45:22 crc kubenswrapper[4825]: I0122 15:45:22.985951 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c331c96c-8079-4df4-bb2d-5cc54412ee99-certs\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.090318 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c331c96c-8079-4df4-bb2d-5cc54412ee99-certs\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.090693 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-config-data\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.090732 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0"
Jan 22
15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.090774 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-scripts\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.090796 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zghxg\" (UniqueName: \"kubernetes.io/projected/c331c96c-8079-4df4-bb2d-5cc54412ee99-kube-api-access-zghxg\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.090839 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.090867 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c331c96c-8079-4df4-bb2d-5cc54412ee99-logs\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.091262 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c331c96c-8079-4df4-bb2d-5cc54412ee99-logs\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.094777 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/projected/c331c96c-8079-4df4-bb2d-5cc54412ee99-certs\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.094905 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-config-data\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.100003 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-scripts\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.101677 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.103464 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-hkzt2" Jan 22 15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.105853 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.196284 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zghxg\" (UniqueName: \"kubernetes.io/projected/c331c96c-8079-4df4-bb2d-5cc54412ee99-kube-api-access-zghxg\") pod \"cloudkitty-api-0\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.326500 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 22 15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.394105 4825 generic.go:334] "Generic (PLEG): container finished" podID="20602869-cdc8-49cb-82ae-36d1c720f637" containerID="25bad4877bc71685f3451e6177320f68bced3d83a63707e09a9b6f959cd1a06b" exitCode=0 Jan 22 15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.394156 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20602869-cdc8-49cb-82ae-36d1c720f637","Type":"ContainerDied","Data":"25bad4877bc71685f3451e6177320f68bced3d83a63707e09a9b6f959cd1a06b"} Jan 22 15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.569968 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3440c5ee-21a0-480d-8960-0d60146517cb" path="/var/lib/kubelet/pods/3440c5ee-21a0-480d-8960-0d60146517cb/volumes" Jan 22 15:45:23 crc kubenswrapper[4825]: I0122 15:45:23.806432 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 22 15:45:24 crc kubenswrapper[4825]: I0122 15:45:23.992804 
4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-679b6799cd-9xsrq" Jan 22 15:45:24 crc kubenswrapper[4825]: I0122 15:45:24.078057 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 22 15:45:24 crc kubenswrapper[4825]: I0122 15:45:24.097327 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-hkzt2"] Jan 22 15:45:24 crc kubenswrapper[4825]: I0122 15:45:24.416014 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 22 15:45:24 crc kubenswrapper[4825]: I0122 15:45:24.429212 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"c331c96c-8079-4df4-bb2d-5cc54412ee99","Type":"ContainerStarted","Data":"6838a4dd91354525896ddba52b0046863861f4c77260d33ab9ae02b294af8a9c"} Jan 22 15:45:24 crc kubenswrapper[4825]: I0122 15:45:24.451003 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"21dfb856-a389-40e5-b6f4-d25ef0029531","Type":"ContainerStarted","Data":"44ba9e7ddfba0efcf8fad7548b2d10c29d22c898bbe1094d75f5e2a3a7976965"} Jan 22 15:45:24 crc kubenswrapper[4825]: I0122 15:45:24.474278 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-hkzt2" event={"ID":"91f09962-57c2-42b0-9077-05b26c5899b3","Type":"ContainerStarted","Data":"fb756ac4a08a79f0945fd2539ededf217dc3cde36c3d6a58518acc1ead70857e"} Jan 22 15:45:24 crc kubenswrapper[4825]: I0122 15:45:24.541255 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f236d595-13a1-4b4d-a37c-9fe0644907c7","Type":"ContainerStarted","Data":"6e3c6b624a7630411f15a3d140775a16069c3d682864dab5325a97531902e643"} Jan 22 15:45:24 crc kubenswrapper[4825]: I0122 15:45:24.743761 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-679b6799cd-9xsrq" 
Jan 22 15:45:25 crc kubenswrapper[4825]: I0122 15:45:25.562472 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f236d595-13a1-4b4d-a37c-9fe0644907c7","Type":"ContainerStarted","Data":"e1e61c03daf77b8dced0f75a17533c7dac6df6a0c8ab289be8c48251af935857"} Jan 22 15:45:25 crc kubenswrapper[4825]: I0122 15:45:25.569630 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"c331c96c-8079-4df4-bb2d-5cc54412ee99","Type":"ContainerStarted","Data":"ceaa09e3eaaf33552096f70c4ea0abcb7b2de53fa2923c7429719fb72085a840"} Jan 22 15:45:25 crc kubenswrapper[4825]: I0122 15:45:25.569685 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"c331c96c-8079-4df4-bb2d-5cc54412ee99","Type":"ContainerStarted","Data":"8628a622b0e2ea4e1c8bc702ba80be3bf8d18b3a7fda3bd039080de988a33815"} Jan 22 15:45:25 crc kubenswrapper[4825]: I0122 15:45:25.569907 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Jan 22 15:45:25 crc kubenswrapper[4825]: I0122 15:45:25.590399 4825 generic.go:334] "Generic (PLEG): container finished" podID="20602869-cdc8-49cb-82ae-36d1c720f637" containerID="754a8e91784326cb73134050a4853ab3fa9c517cc0b1683100ee1198428c939c" exitCode=0 Jan 22 15:45:25 crc kubenswrapper[4825]: I0122 15:45:25.590544 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20602869-cdc8-49cb-82ae-36d1c720f637","Type":"ContainerDied","Data":"754a8e91784326cb73134050a4853ab3fa9c517cc0b1683100ee1198428c939c"} Jan 22 15:45:25 crc kubenswrapper[4825]: I0122 15:45:25.620282 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=3.620247138 podStartE2EDuration="3.620247138s" podCreationTimestamp="2026-01-22 15:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:45:25.590479983 +0000 UTC m=+1272.352006893" watchObservedRunningTime="2026-01-22 15:45:25.620247138 +0000 UTC m=+1272.381774048" Jan 22 15:45:25 crc kubenswrapper[4825]: I0122 15:45:25.641534 4825 generic.go:334] "Generic (PLEG): container finished" podID="91f09962-57c2-42b0-9077-05b26c5899b3" containerID="4e1e7f96f960964209e9e544f3bc9f3d759c931fae34cd9edbd63abc8db7dc81" exitCode=0 Jan 22 15:45:25 crc kubenswrapper[4825]: I0122 15:45:25.642875 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-hkzt2" event={"ID":"91f09962-57c2-42b0-9077-05b26c5899b3","Type":"ContainerDied","Data":"4e1e7f96f960964209e9e544f3bc9f3d759c931fae34cd9edbd63abc8db7dc81"} Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.204813 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.306324 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gcwm\" (UniqueName: \"kubernetes.io/projected/20602869-cdc8-49cb-82ae-36d1c720f637-kube-api-access-4gcwm\") pod \"20602869-cdc8-49cb-82ae-36d1c720f637\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.306504 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-combined-ca-bundle\") pod \"20602869-cdc8-49cb-82ae-36d1c720f637\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.306553 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20602869-cdc8-49cb-82ae-36d1c720f637-etc-machine-id\") pod \"20602869-cdc8-49cb-82ae-36d1c720f637\" 
(UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.306639 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-config-data\") pod \"20602869-cdc8-49cb-82ae-36d1c720f637\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.306661 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-config-data-custom\") pod \"20602869-cdc8-49cb-82ae-36d1c720f637\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.306759 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-scripts\") pod \"20602869-cdc8-49cb-82ae-36d1c720f637\" (UID: \"20602869-cdc8-49cb-82ae-36d1c720f637\") " Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.309617 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20602869-cdc8-49cb-82ae-36d1c720f637-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "20602869-cdc8-49cb-82ae-36d1c720f637" (UID: "20602869-cdc8-49cb-82ae-36d1c720f637"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.322003 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-scripts" (OuterVolumeSpecName: "scripts") pod "20602869-cdc8-49cb-82ae-36d1c720f637" (UID: "20602869-cdc8-49cb-82ae-36d1c720f637"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.324072 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20602869-cdc8-49cb-82ae-36d1c720f637-kube-api-access-4gcwm" (OuterVolumeSpecName: "kube-api-access-4gcwm") pod "20602869-cdc8-49cb-82ae-36d1c720f637" (UID: "20602869-cdc8-49cb-82ae-36d1c720f637"). InnerVolumeSpecName "kube-api-access-4gcwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.329255 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "20602869-cdc8-49cb-82ae-36d1c720f637" (UID: "20602869-cdc8-49cb-82ae-36d1c720f637"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.363482 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-647867566-jbd62" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.405557 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20602869-cdc8-49cb-82ae-36d1c720f637" (UID: "20602869-cdc8-49cb-82ae-36d1c720f637"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.412638 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.412690 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gcwm\" (UniqueName: \"kubernetes.io/projected/20602869-cdc8-49cb-82ae-36d1c720f637-kube-api-access-4gcwm\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.412706 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.412716 4825 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20602869-cdc8-49cb-82ae-36d1c720f637-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.412725 4825 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.568941 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.635275 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-config-data" (OuterVolumeSpecName: "config-data") pod "20602869-cdc8-49cb-82ae-36d1c720f637" (UID: "20602869-cdc8-49cb-82ae-36d1c720f637"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.722305 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20602869-cdc8-49cb-82ae-36d1c720f637-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.765367 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20602869-cdc8-49cb-82ae-36d1c720f637","Type":"ContainerDied","Data":"4f3a93dce17bd1d4c4e28debffb82957d141af2a90e3ec649b3c0e8a15f2d038"} Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.765387 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.765421 4825 scope.go:117] "RemoveContainer" containerID="25bad4877bc71685f3451e6177320f68bced3d83a63707e09a9b6f959cd1a06b" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.768783 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-hkzt2" event={"ID":"91f09962-57c2-42b0-9077-05b26c5899b3","Type":"ContainerStarted","Data":"533415d684514fcc5cb6e9b617c61baf7a74ce45888b4e89ca677e52b85e84cc"} Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.789188 4825 scope.go:117] "RemoveContainer" containerID="754a8e91784326cb73134050a4853ab3fa9c517cc0b1683100ee1198428c939c" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.827380 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.848197 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.872097 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 15:45:26 crc kubenswrapper[4825]: E0122 15:45:26.872631 4825 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20602869-cdc8-49cb-82ae-36d1c720f637" containerName="cinder-scheduler" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.872651 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="20602869-cdc8-49cb-82ae-36d1c720f637" containerName="cinder-scheduler" Jan 22 15:45:26 crc kubenswrapper[4825]: E0122 15:45:26.872680 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20602869-cdc8-49cb-82ae-36d1c720f637" containerName="probe" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.872686 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="20602869-cdc8-49cb-82ae-36d1c720f637" containerName="probe" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.872915 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="20602869-cdc8-49cb-82ae-36d1c720f637" containerName="probe" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.872945 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="20602869-cdc8-49cb-82ae-36d1c720f637" containerName="cinder-scheduler" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.874276 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.883603 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.885954 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 22 15:45:26 crc kubenswrapper[4825]: I0122 15:45:26.938616 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-647867566-jbd62" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.032506 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efc4890e-42b2-4bc7-98fa-40e22ecc24ad-scripts\") pod \"cinder-scheduler-0\" (UID: \"efc4890e-42b2-4bc7-98fa-40e22ecc24ad\") " pod="openstack/cinder-scheduler-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.032656 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efc4890e-42b2-4bc7-98fa-40e22ecc24ad-config-data\") pod \"cinder-scheduler-0\" (UID: \"efc4890e-42b2-4bc7-98fa-40e22ecc24ad\") " pod="openstack/cinder-scheduler-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.032710 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc4890e-42b2-4bc7-98fa-40e22ecc24ad-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"efc4890e-42b2-4bc7-98fa-40e22ecc24ad\") " pod="openstack/cinder-scheduler-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.032750 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjf6n\" (UniqueName: 
\"kubernetes.io/projected/efc4890e-42b2-4bc7-98fa-40e22ecc24ad-kube-api-access-sjf6n\") pod \"cinder-scheduler-0\" (UID: \"efc4890e-42b2-4bc7-98fa-40e22ecc24ad\") " pod="openstack/cinder-scheduler-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.032811 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efc4890e-42b2-4bc7-98fa-40e22ecc24ad-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"efc4890e-42b2-4bc7-98fa-40e22ecc24ad\") " pod="openstack/cinder-scheduler-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.032858 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/efc4890e-42b2-4bc7-98fa-40e22ecc24ad-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"efc4890e-42b2-4bc7-98fa-40e22ecc24ad\") " pod="openstack/cinder-scheduler-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.204399 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efc4890e-42b2-4bc7-98fa-40e22ecc24ad-scripts\") pod \"cinder-scheduler-0\" (UID: \"efc4890e-42b2-4bc7-98fa-40e22ecc24ad\") " pod="openstack/cinder-scheduler-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.204483 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efc4890e-42b2-4bc7-98fa-40e22ecc24ad-config-data\") pod \"cinder-scheduler-0\" (UID: \"efc4890e-42b2-4bc7-98fa-40e22ecc24ad\") " pod="openstack/cinder-scheduler-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.204518 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc4890e-42b2-4bc7-98fa-40e22ecc24ad-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"efc4890e-42b2-4bc7-98fa-40e22ecc24ad\") " pod="openstack/cinder-scheduler-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.204560 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efc4890e-42b2-4bc7-98fa-40e22ecc24ad-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"efc4890e-42b2-4bc7-98fa-40e22ecc24ad\") " pod="openstack/cinder-scheduler-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.204603 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/efc4890e-42b2-4bc7-98fa-40e22ecc24ad-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"efc4890e-42b2-4bc7-98fa-40e22ecc24ad\") " pod="openstack/cinder-scheduler-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.204729 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/efc4890e-42b2-4bc7-98fa-40e22ecc24ad-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"efc4890e-42b2-4bc7-98fa-40e22ecc24ad\") " pod="openstack/cinder-scheduler-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.221464 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-657ccd9fc8-rsx5r"] Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.222027 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-657ccd9fc8-rsx5r" podUID="6952fded-9cdf-4220-9f73-ff832415b100" containerName="barbican-api" containerID="cri-o://b163a390fb9a5d881230c6615588e783f43c1aad4e1c098c6534681b5024fc4f" gracePeriod=30 Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.222217 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-657ccd9fc8-rsx5r" podUID="6952fded-9cdf-4220-9f73-ff832415b100" containerName="barbican-api-log" 
containerID="cri-o://74a9c1526b354517aa97d6b5e15e8e0cbf6d23aa8dfbc2cebff9bc0d4114c925" gracePeriod=30 Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.224709 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efc4890e-42b2-4bc7-98fa-40e22ecc24ad-scripts\") pod \"cinder-scheduler-0\" (UID: \"efc4890e-42b2-4bc7-98fa-40e22ecc24ad\") " pod="openstack/cinder-scheduler-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.275090 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efc4890e-42b2-4bc7-98fa-40e22ecc24ad-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"efc4890e-42b2-4bc7-98fa-40e22ecc24ad\") " pod="openstack/cinder-scheduler-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.276993 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efc4890e-42b2-4bc7-98fa-40e22ecc24ad-config-data\") pod \"cinder-scheduler-0\" (UID: \"efc4890e-42b2-4bc7-98fa-40e22ecc24ad\") " pod="openstack/cinder-scheduler-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.280401 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc4890e-42b2-4bc7-98fa-40e22ecc24ad-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"efc4890e-42b2-4bc7-98fa-40e22ecc24ad\") " pod="openstack/cinder-scheduler-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.307569 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjf6n\" (UniqueName: \"kubernetes.io/projected/efc4890e-42b2-4bc7-98fa-40e22ecc24ad-kube-api-access-sjf6n\") pod \"cinder-scheduler-0\" (UID: \"efc4890e-42b2-4bc7-98fa-40e22ecc24ad\") " pod="openstack/cinder-scheduler-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.343275 4825 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjf6n\" (UniqueName: \"kubernetes.io/projected/efc4890e-42b2-4bc7-98fa-40e22ecc24ad-kube-api-access-sjf6n\") pod \"cinder-scheduler-0\" (UID: \"efc4890e-42b2-4bc7-98fa-40e22ecc24ad\") " pod="openstack/cinder-scheduler-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.502341 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.548533 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20602869-cdc8-49cb-82ae-36d1c720f637" path="/var/lib/kubelet/pods/20602869-cdc8-49cb-82ae-36d1c720f637/volumes" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.778023 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7bfd68784d-7vgv2" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.789119 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f236d595-13a1-4b4d-a37c-9fe0644907c7","Type":"ContainerStarted","Data":"7d140d849a8b420eb998a2da3d122a53380cd0d9b5e8049333f0932e4c9cc08c"} Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.790129 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.792120 4825 generic.go:334] "Generic (PLEG): container finished" podID="6952fded-9cdf-4220-9f73-ff832415b100" containerID="74a9c1526b354517aa97d6b5e15e8e0cbf6d23aa8dfbc2cebff9bc0d4114c925" exitCode=143 Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.792282 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-657ccd9fc8-rsx5r" event={"ID":"6952fded-9cdf-4220-9f73-ff832415b100","Type":"ContainerDied","Data":"74a9c1526b354517aa97d6b5e15e8e0cbf6d23aa8dfbc2cebff9bc0d4114c925"} Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 
15:45:27.793132 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="c331c96c-8079-4df4-bb2d-5cc54412ee99" containerName="cloudkitty-api-log" containerID="cri-o://8628a622b0e2ea4e1c8bc702ba80be3bf8d18b3a7fda3bd039080de988a33815" gracePeriod=30 Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.793269 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58bd69657f-hkzt2" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.793326 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="c331c96c-8079-4df4-bb2d-5cc54412ee99" containerName="cloudkitty-api" containerID="cri-o://ceaa09e3eaaf33552096f70c4ea0abcb7b2de53fa2923c7429719fb72085a840" gracePeriod=30 Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.856957 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58bd69657f-hkzt2" podStartSLOduration=5.856937443 podStartE2EDuration="5.856937443s" podCreationTimestamp="2026-01-22 15:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:45:27.826140759 +0000 UTC m=+1274.587667669" watchObservedRunningTime="2026-01-22 15:45:27.856937443 +0000 UTC m=+1274.618464353" Jan 22 15:45:27 crc kubenswrapper[4825]: I0122 15:45:27.875412 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.093601319 podStartE2EDuration="9.875390873s" podCreationTimestamp="2026-01-22 15:45:18 +0000 UTC" firstStartedPulling="2026-01-22 15:45:19.509700299 +0000 UTC m=+1266.271227209" lastFinishedPulling="2026-01-22 15:45:26.291489853 +0000 UTC m=+1273.053016763" observedRunningTime="2026-01-22 15:45:27.856590003 +0000 UTC m=+1274.618116913" watchObservedRunningTime="2026-01-22 15:45:27.875390873 +0000 UTC 
m=+1274.636917783" Jan 22 15:45:28 crc kubenswrapper[4825]: I0122 15:45:28.815637 4825 generic.go:334] "Generic (PLEG): container finished" podID="c331c96c-8079-4df4-bb2d-5cc54412ee99" containerID="ceaa09e3eaaf33552096f70c4ea0abcb7b2de53fa2923c7429719fb72085a840" exitCode=0 Jan 22 15:45:28 crc kubenswrapper[4825]: I0122 15:45:28.816128 4825 generic.go:334] "Generic (PLEG): container finished" podID="c331c96c-8079-4df4-bb2d-5cc54412ee99" containerID="8628a622b0e2ea4e1c8bc702ba80be3bf8d18b3a7fda3bd039080de988a33815" exitCode=143 Jan 22 15:45:28 crc kubenswrapper[4825]: I0122 15:45:28.815727 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"c331c96c-8079-4df4-bb2d-5cc54412ee99","Type":"ContainerDied","Data":"ceaa09e3eaaf33552096f70c4ea0abcb7b2de53fa2923c7429719fb72085a840"} Jan 22 15:45:28 crc kubenswrapper[4825]: I0122 15:45:28.816198 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"c331c96c-8079-4df4-bb2d-5cc54412ee99","Type":"ContainerDied","Data":"8628a622b0e2ea4e1c8bc702ba80be3bf8d18b3a7fda3bd039080de988a33815"} Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.370270 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.532876 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.533716 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-combined-ca-bundle\") pod \"c331c96c-8079-4df4-bb2d-5cc54412ee99\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.533766 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zghxg\" (UniqueName: \"kubernetes.io/projected/c331c96c-8079-4df4-bb2d-5cc54412ee99-kube-api-access-zghxg\") pod \"c331c96c-8079-4df4-bb2d-5cc54412ee99\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.533867 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-config-data\") pod \"c331c96c-8079-4df4-bb2d-5cc54412ee99\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.533902 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c331c96c-8079-4df4-bb2d-5cc54412ee99-certs\") pod \"c331c96c-8079-4df4-bb2d-5cc54412ee99\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.533967 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-scripts\") pod \"c331c96c-8079-4df4-bb2d-5cc54412ee99\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " Jan 22 15:45:29 crc kubenswrapper[4825]: E0122 
15:45:29.534008 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c331c96c-8079-4df4-bb2d-5cc54412ee99" containerName="cloudkitty-api" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.534026 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="c331c96c-8079-4df4-bb2d-5cc54412ee99" containerName="cloudkitty-api" Jan 22 15:45:29 crc kubenswrapper[4825]: E0122 15:45:29.534064 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c331c96c-8079-4df4-bb2d-5cc54412ee99" containerName="cloudkitty-api-log" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.534077 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="c331c96c-8079-4df4-bb2d-5cc54412ee99" containerName="cloudkitty-api-log" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.534155 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c331c96c-8079-4df4-bb2d-5cc54412ee99-logs\") pod \"c331c96c-8079-4df4-bb2d-5cc54412ee99\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.534207 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-config-data-custom\") pod \"c331c96c-8079-4df4-bb2d-5cc54412ee99\" (UID: \"c331c96c-8079-4df4-bb2d-5cc54412ee99\") " Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.534338 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="c331c96c-8079-4df4-bb2d-5cc54412ee99" containerName="cloudkitty-api" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.534361 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="c331c96c-8079-4df4-bb2d-5cc54412ee99" containerName="cloudkitty-api-log" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.535348 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c331c96c-8079-4df4-bb2d-5cc54412ee99-logs" (OuterVolumeSpecName: "logs") pod "c331c96c-8079-4df4-bb2d-5cc54412ee99" (UID: "c331c96c-8079-4df4-bb2d-5cc54412ee99"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.536381 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.536495 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.541432 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c331c96c-8079-4df4-bb2d-5cc54412ee99" (UID: "c331c96c-8079-4df4-bb2d-5cc54412ee99"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.541558 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c331c96c-8079-4df4-bb2d-5cc54412ee99-kube-api-access-zghxg" (OuterVolumeSpecName: "kube-api-access-zghxg") pod "c331c96c-8079-4df4-bb2d-5cc54412ee99" (UID: "c331c96c-8079-4df4-bb2d-5cc54412ee99"). InnerVolumeSpecName "kube-api-access-zghxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.541815 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c331c96c-8079-4df4-bb2d-5cc54412ee99-certs" (OuterVolumeSpecName: "certs") pod "c331c96c-8079-4df4-bb2d-5cc54412ee99" (UID: "c331c96c-8079-4df4-bb2d-5cc54412ee99"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.546199 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.547788 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-pmfzb" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.548141 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-scripts" (OuterVolumeSpecName: "scripts") pod "c331c96c-8079-4df4-bb2d-5cc54412ee99" (UID: "c331c96c-8079-4df4-bb2d-5cc54412ee99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.550238 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.573273 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-config-data" (OuterVolumeSpecName: "config-data") pod "c331c96c-8079-4df4-bb2d-5cc54412ee99" (UID: "c331c96c-8079-4df4-bb2d-5cc54412ee99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.591056 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c331c96c-8079-4df4-bb2d-5cc54412ee99" (UID: "c331c96c-8079-4df4-bb2d-5cc54412ee99"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.629614 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.636693 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4709fedd-37c2-4afa-b34d-347f46586c55-openstack-config-secret\") pod \"openstackclient\" (UID: \"4709fedd-37c2-4afa-b34d-347f46586c55\") " pod="openstack/openstackclient" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.636805 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4709fedd-37c2-4afa-b34d-347f46586c55-openstack-config\") pod \"openstackclient\" (UID: \"4709fedd-37c2-4afa-b34d-347f46586c55\") " pod="openstack/openstackclient" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.636942 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9cpj\" (UniqueName: \"kubernetes.io/projected/4709fedd-37c2-4afa-b34d-347f46586c55-kube-api-access-t9cpj\") pod \"openstackclient\" (UID: \"4709fedd-37c2-4afa-b34d-347f46586c55\") " pod="openstack/openstackclient" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.637015 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4709fedd-37c2-4afa-b34d-347f46586c55-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4709fedd-37c2-4afa-b34d-347f46586c55\") " pod="openstack/openstackclient" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.637135 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-scripts\") on node 
\"crc\" DevicePath \"\"" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.637153 4825 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c331c96c-8079-4df4-bb2d-5cc54412ee99-logs\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.637164 4825 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.637184 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.637193 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zghxg\" (UniqueName: \"kubernetes.io/projected/c331c96c-8079-4df4-bb2d-5cc54412ee99-kube-api-access-zghxg\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.637202 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c331c96c-8079-4df4-bb2d-5cc54412ee99-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.637210 4825 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c331c96c-8079-4df4-bb2d-5cc54412ee99-certs\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.739102 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9cpj\" (UniqueName: \"kubernetes.io/projected/4709fedd-37c2-4afa-b34d-347f46586c55-kube-api-access-t9cpj\") pod \"openstackclient\" (UID: \"4709fedd-37c2-4afa-b34d-347f46586c55\") " pod="openstack/openstackclient" Jan 22 
15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.739170 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4709fedd-37c2-4afa-b34d-347f46586c55-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4709fedd-37c2-4afa-b34d-347f46586c55\") " pod="openstack/openstackclient" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.739254 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4709fedd-37c2-4afa-b34d-347f46586c55-openstack-config-secret\") pod \"openstackclient\" (UID: \"4709fedd-37c2-4afa-b34d-347f46586c55\") " pod="openstack/openstackclient" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.739311 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4709fedd-37c2-4afa-b34d-347f46586c55-openstack-config\") pod \"openstackclient\" (UID: \"4709fedd-37c2-4afa-b34d-347f46586c55\") " pod="openstack/openstackclient" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.740133 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4709fedd-37c2-4afa-b34d-347f46586c55-openstack-config\") pod \"openstackclient\" (UID: \"4709fedd-37c2-4afa-b34d-347f46586c55\") " pod="openstack/openstackclient" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.748962 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4709fedd-37c2-4afa-b34d-347f46586c55-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4709fedd-37c2-4afa-b34d-347f46586c55\") " pod="openstack/openstackclient" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.756438 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9cpj\" (UniqueName: 
\"kubernetes.io/projected/4709fedd-37c2-4afa-b34d-347f46586c55-kube-api-access-t9cpj\") pod \"openstackclient\" (UID: \"4709fedd-37c2-4afa-b34d-347f46586c55\") " pod="openstack/openstackclient" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.758090 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4709fedd-37c2-4afa-b34d-347f46586c55-openstack-config-secret\") pod \"openstackclient\" (UID: \"4709fedd-37c2-4afa-b34d-347f46586c55\") " pod="openstack/openstackclient" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.833330 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"efc4890e-42b2-4bc7-98fa-40e22ecc24ad","Type":"ContainerStarted","Data":"6c2481d49bbf787a74b25cf23ffab659d91c164c50702adf1ea9e42f019b191a"} Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.837740 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"c331c96c-8079-4df4-bb2d-5cc54412ee99","Type":"ContainerDied","Data":"6838a4dd91354525896ddba52b0046863861f4c77260d33ab9ae02b294af8a9c"} Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.837806 4825 scope.go:117] "RemoveContainer" containerID="ceaa09e3eaaf33552096f70c4ea0abcb7b2de53fa2923c7429719fb72085a840" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.837949 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.846206 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"21dfb856-a389-40e5-b6f4-d25ef0029531","Type":"ContainerStarted","Data":"5225dabd639fef85070e411ac87ad10c769607949c3ea8cc9ef53de94902bae6"} Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.877733 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.8016512430000002 podStartE2EDuration="7.877706211s" podCreationTimestamp="2026-01-22 15:45:22 +0000 UTC" firstStartedPulling="2026-01-22 15:45:23.803084174 +0000 UTC m=+1270.564611084" lastFinishedPulling="2026-01-22 15:45:28.879139142 +0000 UTC m=+1275.640666052" observedRunningTime="2026-01-22 15:45:29.870264577 +0000 UTC m=+1276.631791487" watchObservedRunningTime="2026-01-22 15:45:29.877706211 +0000 UTC m=+1276.639233131" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.882783 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.919735 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.934589 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.959426 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.983020 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.985452 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.993160 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.998192 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.998232 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Jan 22 15:45:29 crc kubenswrapper[4825]: I0122 15:45:29.998234 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.007107 4825 scope.go:117] "RemoveContainer" containerID="8628a622b0e2ea4e1c8bc702ba80be3bf8d18b3a7fda3bd039080de988a33815" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.154664 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-logs\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.154734 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.154781 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-config-data-custom\") pod \"cloudkitty-api-0\" (UID: 
\"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.154811 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbv4q\" (UniqueName: \"kubernetes.io/projected/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-kube-api-access-jbv4q\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.157633 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.158720 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-config-data\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.159082 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-certs\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.159167 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 
15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.159342 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-scripts\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.261661 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-config-data\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.261704 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-certs\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.261736 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.261800 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-scripts\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.261847 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-logs\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.261884 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.261918 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.261947 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbv4q\" (UniqueName: \"kubernetes.io/projected/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-kube-api-access-jbv4q\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.261969 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.267998 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " 
pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.271447 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-logs\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.272082 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.273949 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-config-data\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.274936 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-certs\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.276451 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-scripts\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.279478 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.279967 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbv4q\" (UniqueName: \"kubernetes.io/projected/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-kube-api-access-jbv4q\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.282517 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") " pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.310147 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.532600 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="ffacb6a6-bce4-41f5-b611-1b0e80970b36" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.191:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.580901 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.731023 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-657ccd9fc8-rsx5r" podUID="6952fded-9cdf-4220-9f73-ff832415b100" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.188:9311/healthcheck\": read tcp 10.217.0.2:49520->10.217.0.188:9311: read: connection reset by peer" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.731359 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-657ccd9fc8-rsx5r" podUID="6952fded-9cdf-4220-9f73-ff832415b100" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.188:9311/healthcheck\": read tcp 10.217.0.2:49512->10.217.0.188:9311: read: connection reset by peer" Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.956205 4825 generic.go:334] "Generic (PLEG): container finished" podID="6952fded-9cdf-4220-9f73-ff832415b100" containerID="b163a390fb9a5d881230c6615588e783f43c1aad4e1c098c6534681b5024fc4f" exitCode=0 Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.956585 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-657ccd9fc8-rsx5r" event={"ID":"6952fded-9cdf-4220-9f73-ff832415b100","Type":"ContainerDied","Data":"b163a390fb9a5d881230c6615588e783f43c1aad4e1c098c6534681b5024fc4f"} Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 
15:45:30.975733 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"efc4890e-42b2-4bc7-98fa-40e22ecc24ad","Type":"ContainerStarted","Data":"d861b3e5f451523b76b08e3257ea401436402b75d071e3121db42b780ebd8a87"} Jan 22 15:45:30 crc kubenswrapper[4825]: I0122 15:45:30.979748 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4709fedd-37c2-4afa-b34d-347f46586c55","Type":"ContainerStarted","Data":"8b2c9b468e39e015a203b5286a1951a653e48b171d8eade40d7b46fa29617c0c"} Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.030481 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.552147 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c331c96c-8079-4df4-bb2d-5cc54412ee99" path="/var/lib/kubelet/pods/c331c96c-8079-4df4-bb2d-5cc54412ee99/volumes" Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.569733 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5fdbbbd487-qbcwc" Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.581759 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.679330 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6845d75bcd-cxzv6"] Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.679948 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6845d75bcd-cxzv6" podUID="eacf9923-7898-4237-a615-e2c8de47d3cb" containerName="neutron-api" containerID="cri-o://89747f21a9dcbcda733ed4794c1a788daa54e6e4dea6db06fed567992fcc5d69" gracePeriod=30 Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.680331 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6845d75bcd-cxzv6" podUID="eacf9923-7898-4237-a615-e2c8de47d3cb" containerName="neutron-httpd" containerID="cri-o://da875616181706ec7e9ab0aed3b861b38d1642bddba76527b4b01b520f7f8448" gracePeriod=30 Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.736967 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6952fded-9cdf-4220-9f73-ff832415b100-logs\") pod \"6952fded-9cdf-4220-9f73-ff832415b100\" (UID: \"6952fded-9cdf-4220-9f73-ff832415b100\") " Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.737271 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6952fded-9cdf-4220-9f73-ff832415b100-combined-ca-bundle\") pod \"6952fded-9cdf-4220-9f73-ff832415b100\" (UID: \"6952fded-9cdf-4220-9f73-ff832415b100\") " Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.737367 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxs2p\" (UniqueName: \"kubernetes.io/projected/6952fded-9cdf-4220-9f73-ff832415b100-kube-api-access-bxs2p\") pod \"6952fded-9cdf-4220-9f73-ff832415b100\" (UID: 
\"6952fded-9cdf-4220-9f73-ff832415b100\") " Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.737482 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6952fded-9cdf-4220-9f73-ff832415b100-config-data-custom\") pod \"6952fded-9cdf-4220-9f73-ff832415b100\" (UID: \"6952fded-9cdf-4220-9f73-ff832415b100\") " Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.737692 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6952fded-9cdf-4220-9f73-ff832415b100-logs" (OuterVolumeSpecName: "logs") pod "6952fded-9cdf-4220-9f73-ff832415b100" (UID: "6952fded-9cdf-4220-9f73-ff832415b100"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.737757 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6952fded-9cdf-4220-9f73-ff832415b100-config-data\") pod \"6952fded-9cdf-4220-9f73-ff832415b100\" (UID: \"6952fded-9cdf-4220-9f73-ff832415b100\") " Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.739005 4825 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6952fded-9cdf-4220-9f73-ff832415b100-logs\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.744122 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6952fded-9cdf-4220-9f73-ff832415b100-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6952fded-9cdf-4220-9f73-ff832415b100" (UID: "6952fded-9cdf-4220-9f73-ff832415b100"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.744278 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6952fded-9cdf-4220-9f73-ff832415b100-kube-api-access-bxs2p" (OuterVolumeSpecName: "kube-api-access-bxs2p") pod "6952fded-9cdf-4220-9f73-ff832415b100" (UID: "6952fded-9cdf-4220-9f73-ff832415b100"). InnerVolumeSpecName "kube-api-access-bxs2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.803969 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6952fded-9cdf-4220-9f73-ff832415b100-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6952fded-9cdf-4220-9f73-ff832415b100" (UID: "6952fded-9cdf-4220-9f73-ff832415b100"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.851878 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6952fded-9cdf-4220-9f73-ff832415b100-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.851924 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxs2p\" (UniqueName: \"kubernetes.io/projected/6952fded-9cdf-4220-9f73-ff832415b100-kube-api-access-bxs2p\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:31 crc kubenswrapper[4825]: I0122 15:45:31.851941 4825 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6952fded-9cdf-4220-9f73-ff832415b100-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:32 crc kubenswrapper[4825]: I0122 15:45:32.003126 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/6952fded-9cdf-4220-9f73-ff832415b100-config-data" (OuterVolumeSpecName: "config-data") pod "6952fded-9cdf-4220-9f73-ff832415b100" (UID: "6952fded-9cdf-4220-9f73-ff832415b100"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:32 crc kubenswrapper[4825]: I0122 15:45:32.083554 4825 generic.go:334] "Generic (PLEG): container finished" podID="eacf9923-7898-4237-a615-e2c8de47d3cb" containerID="da875616181706ec7e9ab0aed3b861b38d1642bddba76527b4b01b520f7f8448" exitCode=0 Jan 22 15:45:32 crc kubenswrapper[4825]: I0122 15:45:32.153756 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6845d75bcd-cxzv6" event={"ID":"eacf9923-7898-4237-a615-e2c8de47d3cb","Type":"ContainerDied","Data":"da875616181706ec7e9ab0aed3b861b38d1642bddba76527b4b01b520f7f8448"} Jan 22 15:45:32 crc kubenswrapper[4825]: I0122 15:45:32.181769 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6952fded-9cdf-4220-9f73-ff832415b100-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:32 crc kubenswrapper[4825]: I0122 15:45:32.244291 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa","Type":"ContainerStarted","Data":"0e7537ea16924c190c2747c11144a108589391b9e8c36671010dae31af419e92"} Jan 22 15:45:32 crc kubenswrapper[4825]: I0122 15:45:32.244345 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa","Type":"ContainerStarted","Data":"33612e08d8ba56ec8dc58ad30b63ea57b5a19b5e6eab6a26bbe4bdd459df5678"} Jan 22 15:45:32 crc kubenswrapper[4825]: I0122 15:45:32.245005 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Jan 22 15:45:32 crc kubenswrapper[4825]: I0122 15:45:32.281196 4825 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=3.281172713 podStartE2EDuration="3.281172713s" podCreationTimestamp="2026-01-22 15:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:45:32.270280691 +0000 UTC m=+1279.031807601" watchObservedRunningTime="2026-01-22 15:45:32.281172713 +0000 UTC m=+1279.042699623" Jan 22 15:45:32 crc kubenswrapper[4825]: I0122 15:45:32.307295 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-657ccd9fc8-rsx5r" event={"ID":"6952fded-9cdf-4220-9f73-ff832415b100","Type":"ContainerDied","Data":"aefca7933ee4150f9bb7c56f5bfc5cd7f1645913cfb1cb8245e4aac1123fd9f6"} Jan 22 15:45:32 crc kubenswrapper[4825]: I0122 15:45:32.307350 4825 scope.go:117] "RemoveContainer" containerID="b163a390fb9a5d881230c6615588e783f43c1aad4e1c098c6534681b5024fc4f" Jan 22 15:45:32 crc kubenswrapper[4825]: I0122 15:45:32.307488 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-657ccd9fc8-rsx5r" Jan 22 15:45:32 crc kubenswrapper[4825]: I0122 15:45:32.360104 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="21dfb856-a389-40e5-b6f4-d25ef0029531" containerName="cloudkitty-proc" containerID="cri-o://5225dabd639fef85070e411ac87ad10c769607949c3ea8cc9ef53de94902bae6" gracePeriod=30 Jan 22 15:45:32 crc kubenswrapper[4825]: I0122 15:45:32.361102 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"efc4890e-42b2-4bc7-98fa-40e22ecc24ad","Type":"ContainerStarted","Data":"772ab1704f1bc8aa6a549cc45fbfa3f5bc8da9850b614ec95579f0fe98ab947b"} Jan 22 15:45:32 crc kubenswrapper[4825]: I0122 15:45:32.391162 4825 scope.go:117] "RemoveContainer" containerID="74a9c1526b354517aa97d6b5e15e8e0cbf6d23aa8dfbc2cebff9bc0d4114c925" Jan 22 15:45:32 crc kubenswrapper[4825]: I0122 15:45:32.412108 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.412087181 podStartE2EDuration="6.412087181s" podCreationTimestamp="2026-01-22 15:45:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:45:32.40230064 +0000 UTC m=+1279.163827550" watchObservedRunningTime="2026-01-22 15:45:32.412087181 +0000 UTC m=+1279.173614101" Jan 22 15:45:32 crc kubenswrapper[4825]: I0122 15:45:32.442543 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-657ccd9fc8-rsx5r"] Jan 22 15:45:32 crc kubenswrapper[4825]: I0122 15:45:32.457159 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-657ccd9fc8-rsx5r"] Jan 22 15:45:32 crc kubenswrapper[4825]: I0122 15:45:32.503156 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 22 15:45:33 crc 
kubenswrapper[4825]: I0122 15:45:33.106970 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58bd69657f-hkzt2" Jan 22 15:45:33 crc kubenswrapper[4825]: I0122 15:45:33.275734 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-npwwm"] Jan 22 15:45:33 crc kubenswrapper[4825]: I0122 15:45:33.276128 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-npwwm" podUID="0e71e054-4364-4dc1-9eee-8ff7f6cac148" containerName="dnsmasq-dns" containerID="cri-o://78116bdacf77100afbcead00f940e4166d91e825c997bf04466f4e3a30c6913d" gracePeriod=10 Jan 22 15:45:33 crc kubenswrapper[4825]: I0122 15:45:33.400456 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa","Type":"ContainerStarted","Data":"15e44a45d041b0b1e8eaa16c3a8109a4f9761c79112275e5232fd874d578073a"} Jan 22 15:45:33 crc kubenswrapper[4825]: I0122 15:45:33.598087 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6952fded-9cdf-4220-9f73-ff832415b100" path="/var/lib/kubelet/pods/6952fded-9cdf-4220-9f73-ff832415b100/volumes" Jan 22 15:45:34 crc kubenswrapper[4825]: I0122 15:45:34.451524 4825 generic.go:334] "Generic (PLEG): container finished" podID="0e71e054-4364-4dc1-9eee-8ff7f6cac148" containerID="78116bdacf77100afbcead00f940e4166d91e825c997bf04466f4e3a30c6913d" exitCode=0 Jan 22 15:45:34 crc kubenswrapper[4825]: I0122 15:45:34.451690 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-npwwm" event={"ID":"0e71e054-4364-4dc1-9eee-8ff7f6cac148","Type":"ContainerDied","Data":"78116bdacf77100afbcead00f940e4166d91e825c997bf04466f4e3a30c6913d"} Jan 22 15:45:34 crc kubenswrapper[4825]: I0122 15:45:34.861525 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-npwwm" Jan 22 15:45:34 crc kubenswrapper[4825]: I0122 15:45:34.994898 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-config\") pod \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " Jan 22 15:45:34 crc kubenswrapper[4825]: I0122 15:45:34.995063 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-ovsdbserver-nb\") pod \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " Jan 22 15:45:34 crc kubenswrapper[4825]: I0122 15:45:34.995120 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-dns-swift-storage-0\") pod \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " Jan 22 15:45:34 crc kubenswrapper[4825]: I0122 15:45:34.995152 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-ovsdbserver-sb\") pod \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " Jan 22 15:45:34 crc kubenswrapper[4825]: I0122 15:45:34.995168 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tscdj\" (UniqueName: \"kubernetes.io/projected/0e71e054-4364-4dc1-9eee-8ff7f6cac148-kube-api-access-tscdj\") pod \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " Jan 22 15:45:34 crc kubenswrapper[4825]: I0122 15:45:34.995276 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-dns-svc\") pod \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\" (UID: \"0e71e054-4364-4dc1-9eee-8ff7f6cac148\") " Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.025073 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e71e054-4364-4dc1-9eee-8ff7f6cac148-kube-api-access-tscdj" (OuterVolumeSpecName: "kube-api-access-tscdj") pod "0e71e054-4364-4dc1-9eee-8ff7f6cac148" (UID: "0e71e054-4364-4dc1-9eee-8ff7f6cac148"). InnerVolumeSpecName "kube-api-access-tscdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.060616 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0e71e054-4364-4dc1-9eee-8ff7f6cac148" (UID: "0e71e054-4364-4dc1-9eee-8ff7f6cac148"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.081383 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0e71e054-4364-4dc1-9eee-8ff7f6cac148" (UID: "0e71e054-4364-4dc1-9eee-8ff7f6cac148"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.091049 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0e71e054-4364-4dc1-9eee-8ff7f6cac148" (UID: "0e71e054-4364-4dc1-9eee-8ff7f6cac148"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.102004 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.102039 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.102048 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tscdj\" (UniqueName: \"kubernetes.io/projected/0e71e054-4364-4dc1-9eee-8ff7f6cac148-kube-api-access-tscdj\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.102058 4825 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.103905 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-config" (OuterVolumeSpecName: "config") pod "0e71e054-4364-4dc1-9eee-8ff7f6cac148" (UID: "0e71e054-4364-4dc1-9eee-8ff7f6cac148"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.116340 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0e71e054-4364-4dc1-9eee-8ff7f6cac148" (UID: "0e71e054-4364-4dc1-9eee-8ff7f6cac148"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.205359 4825 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.205402 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e71e054-4364-4dc1-9eee-8ff7f6cac148-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.470908 4825 generic.go:334] "Generic (PLEG): container finished" podID="eacf9923-7898-4237-a615-e2c8de47d3cb" containerID="89747f21a9dcbcda733ed4794c1a788daa54e6e4dea6db06fed567992fcc5d69" exitCode=0 Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.470975 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6845d75bcd-cxzv6" event={"ID":"eacf9923-7898-4237-a615-e2c8de47d3cb","Type":"ContainerDied","Data":"89747f21a9dcbcda733ed4794c1a788daa54e6e4dea6db06fed567992fcc5d69"} Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.479208 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-npwwm" event={"ID":"0e71e054-4364-4dc1-9eee-8ff7f6cac148","Type":"ContainerDied","Data":"0b558b79cd79b64731f7dcfb7430f27e21ff83650058c2c69101204b3789380b"} Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.479471 4825 scope.go:117] "RemoveContainer" containerID="78116bdacf77100afbcead00f940e4166d91e825c997bf04466f4e3a30c6913d" Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.479680 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-npwwm" Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.544678 4825 scope.go:117] "RemoveContainer" containerID="13d927fe6ab522fa2af52dac81996b832c30af3f91e15fc62efc244d36a20eb2" Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.558077 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-npwwm"] Jan 22 15:45:35 crc kubenswrapper[4825]: I0122 15:45:35.583893 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-npwwm"] Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.017115 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.182727 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhplj\" (UniqueName: \"kubernetes.io/projected/eacf9923-7898-4237-a615-e2c8de47d3cb-kube-api-access-vhplj\") pod \"eacf9923-7898-4237-a615-e2c8de47d3cb\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.182819 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-ovndb-tls-certs\") pod \"eacf9923-7898-4237-a615-e2c8de47d3cb\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.182869 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-config\") pod \"eacf9923-7898-4237-a615-e2c8de47d3cb\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.182968 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-combined-ca-bundle\") pod \"eacf9923-7898-4237-a615-e2c8de47d3cb\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.183046 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-httpd-config\") pod \"eacf9923-7898-4237-a615-e2c8de47d3cb\" (UID: \"eacf9923-7898-4237-a615-e2c8de47d3cb\") " Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.191473 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "eacf9923-7898-4237-a615-e2c8de47d3cb" (UID: "eacf9923-7898-4237-a615-e2c8de47d3cb"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.192082 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eacf9923-7898-4237-a615-e2c8de47d3cb-kube-api-access-vhplj" (OuterVolumeSpecName: "kube-api-access-vhplj") pod "eacf9923-7898-4237-a615-e2c8de47d3cb" (UID: "eacf9923-7898-4237-a615-e2c8de47d3cb"). InnerVolumeSpecName "kube-api-access-vhplj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.285251 4825 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.285283 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhplj\" (UniqueName: \"kubernetes.io/projected/eacf9923-7898-4237-a615-e2c8de47d3cb-kube-api-access-vhplj\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.306066 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-config" (OuterVolumeSpecName: "config") pod "eacf9923-7898-4237-a615-e2c8de47d3cb" (UID: "eacf9923-7898-4237-a615-e2c8de47d3cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.307304 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "eacf9923-7898-4237-a615-e2c8de47d3cb" (UID: "eacf9923-7898-4237-a615-e2c8de47d3cb"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.320113 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eacf9923-7898-4237-a615-e2c8de47d3cb" (UID: "eacf9923-7898-4237-a615-e2c8de47d3cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.387888 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.388304 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.388431 4825 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eacf9923-7898-4237-a615-e2c8de47d3cb-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.419890 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.494563 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6845d75bcd-cxzv6" event={"ID":"eacf9923-7898-4237-a615-e2c8de47d3cb","Type":"ContainerDied","Data":"fdec62d8abdbe6392684a2bc56b0bcfefb5c50374f06ccf1ad9ccfc9cc33ef8c"} Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.494627 4825 scope.go:117] "RemoveContainer" containerID="da875616181706ec7e9ab0aed3b861b38d1642bddba76527b4b01b520f7f8448" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.494845 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6845d75bcd-cxzv6" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.498753 4825 generic.go:334] "Generic (PLEG): container finished" podID="21dfb856-a389-40e5-b6f4-d25ef0029531" containerID="5225dabd639fef85070e411ac87ad10c769607949c3ea8cc9ef53de94902bae6" exitCode=0 Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.498813 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"21dfb856-a389-40e5-b6f4-d25ef0029531","Type":"ContainerDied","Data":"5225dabd639fef85070e411ac87ad10c769607949c3ea8cc9ef53de94902bae6"} Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.498835 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"21dfb856-a389-40e5-b6f4-d25ef0029531","Type":"ContainerDied","Data":"44ba9e7ddfba0efcf8fad7548b2d10c29d22c898bbe1094d75f5e2a3a7976965"} Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.498883 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.538154 4825 scope.go:117] "RemoveContainer" containerID="89747f21a9dcbcda733ed4794c1a788daa54e6e4dea6db06fed567992fcc5d69" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.540639 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6845d75bcd-cxzv6"] Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.552188 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6845d75bcd-cxzv6"] Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.591831 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntgc7\" (UniqueName: \"kubernetes.io/projected/21dfb856-a389-40e5-b6f4-d25ef0029531-kube-api-access-ntgc7\") pod \"21dfb856-a389-40e5-b6f4-d25ef0029531\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.591884 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-combined-ca-bundle\") pod \"21dfb856-a389-40e5-b6f4-d25ef0029531\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.592024 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/21dfb856-a389-40e5-b6f4-d25ef0029531-certs\") pod \"21dfb856-a389-40e5-b6f4-d25ef0029531\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.592059 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-scripts\") pod \"21dfb856-a389-40e5-b6f4-d25ef0029531\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " Jan 22 15:45:36 crc kubenswrapper[4825]: 
I0122 15:45:36.592167 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-config-data\") pod \"21dfb856-a389-40e5-b6f4-d25ef0029531\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.592194 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-config-data-custom\") pod \"21dfb856-a389-40e5-b6f4-d25ef0029531\" (UID: \"21dfb856-a389-40e5-b6f4-d25ef0029531\") " Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.595311 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21dfb856-a389-40e5-b6f4-d25ef0029531-certs" (OuterVolumeSpecName: "certs") pod "21dfb856-a389-40e5-b6f4-d25ef0029531" (UID: "21dfb856-a389-40e5-b6f4-d25ef0029531"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.597137 4825 scope.go:117] "RemoveContainer" containerID="5225dabd639fef85070e411ac87ad10c769607949c3ea8cc9ef53de94902bae6" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.599451 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21dfb856-a389-40e5-b6f4-d25ef0029531-kube-api-access-ntgc7" (OuterVolumeSpecName: "kube-api-access-ntgc7") pod "21dfb856-a389-40e5-b6f4-d25ef0029531" (UID: "21dfb856-a389-40e5-b6f4-d25ef0029531"). InnerVolumeSpecName "kube-api-access-ntgc7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.599532 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "21dfb856-a389-40e5-b6f4-d25ef0029531" (UID: "21dfb856-a389-40e5-b6f4-d25ef0029531"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.607703 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-scripts" (OuterVolumeSpecName: "scripts") pod "21dfb856-a389-40e5-b6f4-d25ef0029531" (UID: "21dfb856-a389-40e5-b6f4-d25ef0029531"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.634725 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21dfb856-a389-40e5-b6f4-d25ef0029531" (UID: "21dfb856-a389-40e5-b6f4-d25ef0029531"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.644520 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-config-data" (OuterVolumeSpecName: "config-data") pod "21dfb856-a389-40e5-b6f4-d25ef0029531" (UID: "21dfb856-a389-40e5-b6f4-d25ef0029531"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.695444 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntgc7\" (UniqueName: \"kubernetes.io/projected/21dfb856-a389-40e5-b6f4-d25ef0029531-kube-api-access-ntgc7\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.695762 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.695790 4825 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/21dfb856-a389-40e5-b6f4-d25ef0029531-certs\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.695800 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.695810 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.695821 4825 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21dfb856-a389-40e5-b6f4-d25ef0029531-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.762439 4825 scope.go:117] "RemoveContainer" containerID="5225dabd639fef85070e411ac87ad10c769607949c3ea8cc9ef53de94902bae6" Jan 22 15:45:36 crc kubenswrapper[4825]: E0122 15:45:36.762953 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"5225dabd639fef85070e411ac87ad10c769607949c3ea8cc9ef53de94902bae6\": container with ID starting with 5225dabd639fef85070e411ac87ad10c769607949c3ea8cc9ef53de94902bae6 not found: ID does not exist" containerID="5225dabd639fef85070e411ac87ad10c769607949c3ea8cc9ef53de94902bae6" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.763042 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5225dabd639fef85070e411ac87ad10c769607949c3ea8cc9ef53de94902bae6"} err="failed to get container status \"5225dabd639fef85070e411ac87ad10c769607949c3ea8cc9ef53de94902bae6\": rpc error: code = NotFound desc = could not find container \"5225dabd639fef85070e411ac87ad10c769607949c3ea8cc9ef53de94902bae6\": container with ID starting with 5225dabd639fef85070e411ac87ad10c769607949c3ea8cc9ef53de94902bae6 not found: ID does not exist" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.879294 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.898874 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.914075 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 22 15:45:36 crc kubenswrapper[4825]: E0122 15:45:36.914482 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eacf9923-7898-4237-a615-e2c8de47d3cb" containerName="neutron-api" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.914500 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="eacf9923-7898-4237-a615-e2c8de47d3cb" containerName="neutron-api" Jan 22 15:45:36 crc kubenswrapper[4825]: E0122 15:45:36.914518 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e71e054-4364-4dc1-9eee-8ff7f6cac148" containerName="init" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.914525 4825 
state_mem.go:107] "Deleted CPUSet assignment" podUID="0e71e054-4364-4dc1-9eee-8ff7f6cac148" containerName="init" Jan 22 15:45:36 crc kubenswrapper[4825]: E0122 15:45:36.914541 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6952fded-9cdf-4220-9f73-ff832415b100" containerName="barbican-api-log" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.914547 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="6952fded-9cdf-4220-9f73-ff832415b100" containerName="barbican-api-log" Jan 22 15:45:36 crc kubenswrapper[4825]: E0122 15:45:36.914581 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e71e054-4364-4dc1-9eee-8ff7f6cac148" containerName="dnsmasq-dns" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.914588 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e71e054-4364-4dc1-9eee-8ff7f6cac148" containerName="dnsmasq-dns" Jan 22 15:45:36 crc kubenswrapper[4825]: E0122 15:45:36.914596 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21dfb856-a389-40e5-b6f4-d25ef0029531" containerName="cloudkitty-proc" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.914602 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="21dfb856-a389-40e5-b6f4-d25ef0029531" containerName="cloudkitty-proc" Jan 22 15:45:36 crc kubenswrapper[4825]: E0122 15:45:36.914609 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eacf9923-7898-4237-a615-e2c8de47d3cb" containerName="neutron-httpd" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.914615 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="eacf9923-7898-4237-a615-e2c8de47d3cb" containerName="neutron-httpd" Jan 22 15:45:36 crc kubenswrapper[4825]: E0122 15:45:36.914623 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6952fded-9cdf-4220-9f73-ff832415b100" containerName="barbican-api" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.914628 4825 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="6952fded-9cdf-4220-9f73-ff832415b100" containerName="barbican-api" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.914823 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="eacf9923-7898-4237-a615-e2c8de47d3cb" containerName="neutron-httpd" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.914837 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="6952fded-9cdf-4220-9f73-ff832415b100" containerName="barbican-api-log" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.914852 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e71e054-4364-4dc1-9eee-8ff7f6cac148" containerName="dnsmasq-dns" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.914862 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="6952fded-9cdf-4220-9f73-ff832415b100" containerName="barbican-api" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.914872 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="eacf9923-7898-4237-a615-e2c8de47d3cb" containerName="neutron-api" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.914887 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="21dfb856-a389-40e5-b6f4-d25ef0029531" containerName="cloudkitty-proc" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.915759 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.920606 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Jan 22 15:45:36 crc kubenswrapper[4825]: I0122 15:45:36.928279 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.211544 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.212889 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-config-data\") pod \"cloudkitty-proc-0\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.213103 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/af574794-cc05-40fd-8dce-0497c37a9888-certs\") pod \"cloudkitty-proc-0\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.213351 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrvcv\" (UniqueName: \"kubernetes.io/projected/af574794-cc05-40fd-8dce-0497c37a9888-kube-api-access-hrvcv\") pod \"cloudkitty-proc-0\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.213396 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.213549 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-scripts\") pod \"cloudkitty-proc-0\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.315297 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-config-data\") pod \"cloudkitty-proc-0\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.315372 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/af574794-cc05-40fd-8dce-0497c37a9888-certs\") pod \"cloudkitty-proc-0\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.315497 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrvcv\" (UniqueName: \"kubernetes.io/projected/af574794-cc05-40fd-8dce-0497c37a9888-kube-api-access-hrvcv\") pod \"cloudkitty-proc-0\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.315524 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.315601 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-scripts\") pod \"cloudkitty-proc-0\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.315682 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.319695 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-config-data\") pod \"cloudkitty-proc-0\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.320385 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.324509 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") " pod="openstack/cloudkitty-proc-0" Jan 22 
15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.324687 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-scripts\") pod \"cloudkitty-proc-0\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.324836 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/af574794-cc05-40fd-8dce-0497c37a9888-certs\") pod \"cloudkitty-proc-0\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.344317 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrvcv\" (UniqueName: \"kubernetes.io/projected/af574794-cc05-40fd-8dce-0497c37a9888-kube-api-access-hrvcv\") pod \"cloudkitty-proc-0\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.539524 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e71e054-4364-4dc1-9eee-8ff7f6cac148" path="/var/lib/kubelet/pods/0e71e054-4364-4dc1-9eee-8ff7f6cac148/volumes" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.540477 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21dfb856-a389-40e5-b6f4-d25ef0029531" path="/var/lib/kubelet/pods/21dfb856-a389-40e5-b6f4-d25ef0029531/volumes" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.541384 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eacf9923-7898-4237-a615-e2c8de47d3cb" path="/var/lib/kubelet/pods/eacf9923-7898-4237-a615-e2c8de47d3cb/volumes" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.555221 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 22 15:45:37 crc kubenswrapper[4825]: I0122 15:45:37.880111 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 22 15:45:38 crc kubenswrapper[4825]: I0122 15:45:38.110463 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 22 15:45:38 crc kubenswrapper[4825]: W0122 15:45:38.119195 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf574794_cc05_40fd_8dce_0497c37a9888.slice/crio-15591dd85f7ac1cb34d4cc8eef6b481f8c2118e5a130b9a6f8fa76683eeaa7af WatchSource:0}: Error finding container 15591dd85f7ac1cb34d4cc8eef6b481f8c2118e5a130b9a6f8fa76683eeaa7af: Status 404 returned error can't find the container with id 15591dd85f7ac1cb34d4cc8eef6b481f8c2118e5a130b9a6f8fa76683eeaa7af Jan 22 15:45:38 crc kubenswrapper[4825]: I0122 15:45:38.562329 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"af574794-cc05-40fd-8dce-0497c37a9888","Type":"ContainerStarted","Data":"15591dd85f7ac1cb34d4cc8eef6b481f8c2118e5a130b9a6f8fa76683eeaa7af"} Jan 22 15:45:39 crc kubenswrapper[4825]: I0122 15:45:39.579921 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"af574794-cc05-40fd-8dce-0497c37a9888","Type":"ContainerStarted","Data":"03f6480418f1b85fd326581f6478d8ae49d2bbe95c8fdab8b1888e404fc399a6"} Jan 22 15:45:39 crc kubenswrapper[4825]: I0122 15:45:39.616952 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=3.616922896 podStartE2EDuration="3.616922896s" podCreationTimestamp="2026-01-22 15:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:45:39.609936645 +0000 UTC 
m=+1286.371463555" watchObservedRunningTime="2026-01-22 15:45:39.616922896 +0000 UTC m=+1286.378449806" Jan 22 15:45:43 crc kubenswrapper[4825]: I0122 15:45:43.825823 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-789468d499-5b789"] Jan 22 15:45:43 crc kubenswrapper[4825]: I0122 15:45:43.828505 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:43 crc kubenswrapper[4825]: I0122 15:45:43.831174 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 22 15:45:43 crc kubenswrapper[4825]: I0122 15:45:43.833345 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 22 15:45:43 crc kubenswrapper[4825]: I0122 15:45:43.833578 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 22 15:45:43 crc kubenswrapper[4825]: I0122 15:45:43.888909 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-789468d499-5b789"] Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.027315 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562fb5cd-164c-4308-851d-88b6afd1e3c2-combined-ca-bundle\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.027403 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch59v\" (UniqueName: \"kubernetes.io/projected/562fb5cd-164c-4308-851d-88b6afd1e3c2-kube-api-access-ch59v\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 
15:45:44.027887 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/562fb5cd-164c-4308-851d-88b6afd1e3c2-public-tls-certs\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.027943 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/562fb5cd-164c-4308-851d-88b6afd1e3c2-run-httpd\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.028083 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/562fb5cd-164c-4308-851d-88b6afd1e3c2-log-httpd\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.028155 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/562fb5cd-164c-4308-851d-88b6afd1e3c2-etc-swift\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.028218 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/562fb5cd-164c-4308-851d-88b6afd1e3c2-internal-tls-certs\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc 
kubenswrapper[4825]: I0122 15:45:44.028271 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/562fb5cd-164c-4308-851d-88b6afd1e3c2-config-data\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.130671 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562fb5cd-164c-4308-851d-88b6afd1e3c2-combined-ca-bundle\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.130761 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch59v\" (UniqueName: \"kubernetes.io/projected/562fb5cd-164c-4308-851d-88b6afd1e3c2-kube-api-access-ch59v\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.130823 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/562fb5cd-164c-4308-851d-88b6afd1e3c2-public-tls-certs\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.130858 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/562fb5cd-164c-4308-851d-88b6afd1e3c2-run-httpd\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 
15:45:44.130920 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/562fb5cd-164c-4308-851d-88b6afd1e3c2-log-httpd\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.130958 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/562fb5cd-164c-4308-851d-88b6afd1e3c2-etc-swift\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.131016 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/562fb5cd-164c-4308-851d-88b6afd1e3c2-internal-tls-certs\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.131039 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/562fb5cd-164c-4308-851d-88b6afd1e3c2-config-data\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.131717 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/562fb5cd-164c-4308-851d-88b6afd1e3c2-log-httpd\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.131786 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/562fb5cd-164c-4308-851d-88b6afd1e3c2-run-httpd\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.141002 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/562fb5cd-164c-4308-851d-88b6afd1e3c2-internal-tls-certs\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.141810 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/562fb5cd-164c-4308-851d-88b6afd1e3c2-public-tls-certs\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.142896 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/562fb5cd-164c-4308-851d-88b6afd1e3c2-etc-swift\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.145757 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562fb5cd-164c-4308-851d-88b6afd1e3c2-combined-ca-bundle\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.148015 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/562fb5cd-164c-4308-851d-88b6afd1e3c2-config-data\") 
pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789"
Jan 22 15:45:44 crc kubenswrapper[4825]: I0122 15:45:44.497735 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch59v\" (UniqueName: \"kubernetes.io/projected/562fb5cd-164c-4308-851d-88b6afd1e3c2-kube-api-access-ch59v\") pod \"swift-proxy-789468d499-5b789\" (UID: \"562fb5cd-164c-4308-851d-88b6afd1e3c2\") " pod="openstack/swift-proxy-789468d499-5b789"
Jan 22 15:45:45 crc kubenswrapper[4825]: I0122 15:45:45.007529 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-789468d499-5b789"
Jan 22 15:45:47 crc kubenswrapper[4825]: I0122 15:45:47.333955 4825 generic.go:334] "Generic (PLEG): container finished" podID="ffacb6a6-bce4-41f5-b611-1b0e80970b36" containerID="2acba36dacbe61b9fa53cf1073c48aac2ec5bb977130cd3e02173c02a493fd2d" exitCode=137
Jan 22 15:45:47 crc kubenswrapper[4825]: I0122 15:45:47.334229 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ffacb6a6-bce4-41f5-b611-1b0e80970b36","Type":"ContainerDied","Data":"2acba36dacbe61b9fa53cf1073c48aac2ec5bb977130cd3e02173c02a493fd2d"}
Jan 22 15:45:47 crc kubenswrapper[4825]: I0122 15:45:47.633910 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 15:45:47 crc kubenswrapper[4825]: I0122 15:45:47.634234 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerName="ceilometer-central-agent" containerID="cri-o://0ee22263dd2547b6f844ef11ddafe4c6e0d5d6073c7f67bb3962ab75c0091d71" gracePeriod=30
Jan 22 15:45:47 crc kubenswrapper[4825]: I0122 15:45:47.635024 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerName="sg-core" containerID="cri-o://e1e61c03daf77b8dced0f75a17533c7dac6df6a0c8ab289be8c48251af935857" gracePeriod=30
Jan 22 15:45:47 crc kubenswrapper[4825]: I0122 15:45:47.635059 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerName="ceilometer-notification-agent" containerID="cri-o://6e3c6b624a7630411f15a3d140775a16069c3d682864dab5325a97531902e643" gracePeriod=30
Jan 22 15:45:47 crc kubenswrapper[4825]: I0122 15:45:47.635289 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerName="proxy-httpd" containerID="cri-o://7d140d849a8b420eb998a2da3d122a53380cd0d9b5e8049333f0932e4c9cc08c" gracePeriod=30
Jan 22 15:45:47 crc kubenswrapper[4825]: I0122 15:45:47.742923 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.196:3000/\": read tcp 10.217.0.2:54516->10.217.0.196:3000: read: connection reset by peer"
Jan 22 15:45:48 crc kubenswrapper[4825]: I0122 15:45:48.352740 4825 generic.go:334] "Generic (PLEG): container finished" podID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerID="7d140d849a8b420eb998a2da3d122a53380cd0d9b5e8049333f0932e4c9cc08c" exitCode=0
Jan 22 15:45:48 crc kubenswrapper[4825]: I0122 15:45:48.352775 4825 generic.go:334] "Generic (PLEG): container finished" podID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerID="e1e61c03daf77b8dced0f75a17533c7dac6df6a0c8ab289be8c48251af935857" exitCode=2
Jan 22 15:45:48 crc kubenswrapper[4825]: I0122 15:45:48.352797 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f236d595-13a1-4b4d-a37c-9fe0644907c7","Type":"ContainerDied","Data":"7d140d849a8b420eb998a2da3d122a53380cd0d9b5e8049333f0932e4c9cc08c"}
Jan 22 15:45:48 crc kubenswrapper[4825]: I0122 15:45:48.352823 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f236d595-13a1-4b4d-a37c-9fe0644907c7","Type":"ContainerDied","Data":"e1e61c03daf77b8dced0f75a17533c7dac6df6a0c8ab289be8c48251af935857"}
Jan 22 15:45:48 crc kubenswrapper[4825]: I0122 15:45:48.946230 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.196:3000/\": dial tcp 10.217.0.196:3000: connect: connection refused"
Jan 22 15:45:49 crc kubenswrapper[4825]: I0122 15:45:49.367666 4825 generic.go:334] "Generic (PLEG): container finished" podID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerID="0ee22263dd2547b6f844ef11ddafe4c6e0d5d6073c7f67bb3962ab75c0091d71" exitCode=0
Jan 22 15:45:49 crc kubenswrapper[4825]: I0122 15:45:49.367712 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f236d595-13a1-4b4d-a37c-9fe0644907c7","Type":"ContainerDied","Data":"0ee22263dd2547b6f844ef11ddafe4c6e0d5d6073c7f67bb3962ab75c0091d71"}
Jan 22 15:45:50 crc kubenswrapper[4825]: I0122 15:45:50.391312 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="ffacb6a6-bce4-41f5-b611-1b0e80970b36" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.191:8776/healthcheck\": dial tcp 10.217.0.191:8776: connect: connection refused"
Jan 22 15:45:51 crc kubenswrapper[4825]: I0122 15:45:51.399107 4825 generic.go:334] "Generic (PLEG): container finished" podID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerID="6e3c6b624a7630411f15a3d140775a16069c3d682864dab5325a97531902e643" exitCode=0
Jan 22 15:45:51 crc kubenswrapper[4825]: I0122 15:45:51.399183 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f236d595-13a1-4b4d-a37c-9fe0644907c7","Type":"ContainerDied","Data":"6e3c6b624a7630411f15a3d140775a16069c3d682864dab5325a97531902e643"}
Jan 22 15:45:53 crc kubenswrapper[4825]: E0122 15:45:53.975863 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified"
Jan 22 15:45:53 crc kubenswrapper[4825]: E0122 15:45:53.976865 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n546hb7h5d9h67fh676h5c7h58ch67fh65fh77h578h587h54fh5b8hc4h595hdbh9ch6bh577hb4h57bh54bh649h548hd4h657hc7h5ffh676h89h68bq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t9cpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(4709fedd-37c2-4afa-b34d-347f46586c55): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 22 15:45:53 crc kubenswrapper[4825]: E0122 15:45:53.978211 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="4709fedd-37c2-4afa-b34d-347f46586c55"
Jan 22 15:45:54 crc kubenswrapper[4825]: E0122 15:45:54.448415 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="4709fedd-37c2-4afa-b34d-347f46586c55"
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.586998 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.600593 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.665745 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-scripts\") pod \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") "
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.665867 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-config-data\") pod \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") "
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.665892 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-combined-ca-bundle\") pod \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") "
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.665932 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-sg-core-conf-yaml\") pod \"f236d595-13a1-4b4d-a37c-9fe0644907c7\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") "
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.665994 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-scripts\") pod \"f236d595-13a1-4b4d-a37c-9fe0644907c7\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") "
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.666056 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89lq8\" (UniqueName: \"kubernetes.io/projected/ffacb6a6-bce4-41f5-b611-1b0e80970b36-kube-api-access-89lq8\") pod \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") "
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.666128 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffacb6a6-bce4-41f5-b611-1b0e80970b36-etc-machine-id\") pod \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") "
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.666174 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f236d595-13a1-4b4d-a37c-9fe0644907c7-log-httpd\") pod \"f236d595-13a1-4b4d-a37c-9fe0644907c7\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") "
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.666201 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffacb6a6-bce4-41f5-b611-1b0e80970b36-logs\") pod \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") "
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.666224 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f236d595-13a1-4b4d-a37c-9fe0644907c7-run-httpd\") pod \"f236d595-13a1-4b4d-a37c-9fe0644907c7\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") "
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.666255 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-config-data-custom\") pod \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\" (UID: \"ffacb6a6-bce4-41f5-b611-1b0e80970b36\") "
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.667228 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-combined-ca-bundle\") pod \"f236d595-13a1-4b4d-a37c-9fe0644907c7\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") "
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.667286 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwrfj\" (UniqueName: \"kubernetes.io/projected/f236d595-13a1-4b4d-a37c-9fe0644907c7-kube-api-access-cwrfj\") pod \"f236d595-13a1-4b4d-a37c-9fe0644907c7\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") "
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.667318 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-config-data\") pod \"f236d595-13a1-4b4d-a37c-9fe0644907c7\" (UID: \"f236d595-13a1-4b4d-a37c-9fe0644907c7\") "
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.669446 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffacb6a6-bce4-41f5-b611-1b0e80970b36-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ffacb6a6-bce4-41f5-b611-1b0e80970b36" (UID: "ffacb6a6-bce4-41f5-b611-1b0e80970b36"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.670055 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f236d595-13a1-4b4d-a37c-9fe0644907c7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f236d595-13a1-4b4d-a37c-9fe0644907c7" (UID: "f236d595-13a1-4b4d-a37c-9fe0644907c7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.672212 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffacb6a6-bce4-41f5-b611-1b0e80970b36-logs" (OuterVolumeSpecName: "logs") pod "ffacb6a6-bce4-41f5-b611-1b0e80970b36" (UID: "ffacb6a6-bce4-41f5-b611-1b0e80970b36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.670456 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f236d595-13a1-4b4d-a37c-9fe0644907c7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f236d595-13a1-4b4d-a37c-9fe0644907c7" (UID: "f236d595-13a1-4b4d-a37c-9fe0644907c7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.675633 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-scripts" (OuterVolumeSpecName: "scripts") pod "ffacb6a6-bce4-41f5-b611-1b0e80970b36" (UID: "ffacb6a6-bce4-41f5-b611-1b0e80970b36"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.813991 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-scripts" (OuterVolumeSpecName: "scripts") pod "f236d595-13a1-4b4d-a37c-9fe0644907c7" (UID: "f236d595-13a1-4b4d-a37c-9fe0644907c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.817968 4825 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffacb6a6-bce4-41f5-b611-1b0e80970b36-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.818512 4825 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f236d595-13a1-4b4d-a37c-9fe0644907c7-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.818522 4825 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffacb6a6-bce4-41f5-b611-1b0e80970b36-logs\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.818532 4825 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f236d595-13a1-4b4d-a37c-9fe0644907c7-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.818540 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.818557 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.818899 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f236d595-13a1-4b4d-a37c-9fe0644907c7-kube-api-access-cwrfj" (OuterVolumeSpecName: "kube-api-access-cwrfj") pod "f236d595-13a1-4b4d-a37c-9fe0644907c7" (UID: "f236d595-13a1-4b4d-a37c-9fe0644907c7"). InnerVolumeSpecName "kube-api-access-cwrfj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.821668 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ffacb6a6-bce4-41f5-b611-1b0e80970b36" (UID: "ffacb6a6-bce4-41f5-b611-1b0e80970b36"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.830734 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffacb6a6-bce4-41f5-b611-1b0e80970b36-kube-api-access-89lq8" (OuterVolumeSpecName: "kube-api-access-89lq8") pod "ffacb6a6-bce4-41f5-b611-1b0e80970b36" (UID: "ffacb6a6-bce4-41f5-b611-1b0e80970b36"). InnerVolumeSpecName "kube-api-access-89lq8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.855658 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffacb6a6-bce4-41f5-b611-1b0e80970b36" (UID: "ffacb6a6-bce4-41f5-b611-1b0e80970b36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.922257 4825 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.922288 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwrfj\" (UniqueName: \"kubernetes.io/projected/f236d595-13a1-4b4d-a37c-9fe0644907c7-kube-api-access-cwrfj\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.922303 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.922313 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89lq8\" (UniqueName: \"kubernetes.io/projected/ffacb6a6-bce4-41f5-b611-1b0e80970b36-kube-api-access-89lq8\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.966649 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f236d595-13a1-4b4d-a37c-9fe0644907c7" (UID: "f236d595-13a1-4b4d-a37c-9fe0644907c7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:45:54 crc kubenswrapper[4825]: I0122 15:45:54.971108 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-config-data" (OuterVolumeSpecName: "config-data") pod "ffacb6a6-bce4-41f5-b611-1b0e80970b36" (UID: "ffacb6a6-bce4-41f5-b611-1b0e80970b36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.000155 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-config-data" (OuterVolumeSpecName: "config-data") pod "f236d595-13a1-4b4d-a37c-9fe0644907c7" (UID: "f236d595-13a1-4b4d-a37c-9fe0644907c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.030150 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f236d595-13a1-4b4d-a37c-9fe0644907c7" (UID: "f236d595-13a1-4b4d-a37c-9fe0644907c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.030193 4825 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.030217 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.030227 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffacb6a6-bce4-41f5-b611-1b0e80970b36-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.113455 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-789468d499-5b789"]
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.133045 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f236d595-13a1-4b4d-a37c-9fe0644907c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.463359 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-789468d499-5b789" event={"ID":"562fb5cd-164c-4308-851d-88b6afd1e3c2","Type":"ContainerStarted","Data":"788bc658c15b3d370e5264347877af43db45b8f408ae3616d6a10fd6055c9377"}
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.463607 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-789468d499-5b789" event={"ID":"562fb5cd-164c-4308-851d-88b6afd1e3c2","Type":"ContainerStarted","Data":"b0c35b5f2cab08ef33dda135a56c1d2e8299befa34f1bfd8fedaf2692858ef6f"}
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.468652 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ffacb6a6-bce4-41f5-b611-1b0e80970b36","Type":"ContainerDied","Data":"04d1eda05306cfd09d660e677a74cea2e72f22459ccb7de050b298448e575b10"}
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.468711 4825 scope.go:117] "RemoveContainer" containerID="2acba36dacbe61b9fa53cf1073c48aac2ec5bb977130cd3e02173c02a493fd2d"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.468735 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.473591 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f236d595-13a1-4b4d-a37c-9fe0644907c7","Type":"ContainerDied","Data":"11df356c0e717b03adb13c346a8ac0742556f755f574fb3a8f1e3b9d7bb60f40"}
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.473652 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.513339 4825 scope.go:117] "RemoveContainer" containerID="7b491e19054d24a1cf37051183a8bcd6a79820ef6957ff4766704ca990bbf6dd"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.552867 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.568275 4825 scope.go:117] "RemoveContainer" containerID="7d140d849a8b420eb998a2da3d122a53380cd0d9b5e8049333f0932e4c9cc08c"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.590360 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.618854 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.637756 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.667871 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 22 15:45:55 crc kubenswrapper[4825]: E0122 15:45:55.668800 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerName="ceilometer-central-agent"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.668853 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerName="ceilometer-central-agent"
Jan 22 15:45:55 crc kubenswrapper[4825]: E0122 15:45:55.668889 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffacb6a6-bce4-41f5-b611-1b0e80970b36" containerName="cinder-api"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.668903 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffacb6a6-bce4-41f5-b611-1b0e80970b36" containerName="cinder-api"
Jan 22 15:45:55 crc kubenswrapper[4825]: E0122 15:45:55.668924 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerName="sg-core"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.668940 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerName="sg-core"
Jan 22 15:45:55 crc kubenswrapper[4825]: E0122 15:45:55.668960 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffacb6a6-bce4-41f5-b611-1b0e80970b36" containerName="cinder-api-log"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.668993 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffacb6a6-bce4-41f5-b611-1b0e80970b36" containerName="cinder-api-log"
Jan 22 15:45:55 crc kubenswrapper[4825]: E0122 15:45:55.669151 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerName="proxy-httpd"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.669172 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerName="proxy-httpd"
Jan 22 15:45:55 crc kubenswrapper[4825]: E0122 15:45:55.669212 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerName="ceilometer-notification-agent"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.669228 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerName="ceilometer-notification-agent"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.669653 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffacb6a6-bce4-41f5-b611-1b0e80970b36" containerName="cinder-api"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.669682 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffacb6a6-bce4-41f5-b611-1b0e80970b36" containerName="cinder-api-log"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.669712 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerName="ceilometer-notification-agent"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.669732 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerName="proxy-httpd"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.669780 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerName="sg-core"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.669809 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f236d595-13a1-4b4d-a37c-9fe0644907c7" containerName="ceilometer-central-agent"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.673864 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.681744 4825 scope.go:117] "RemoveContainer" containerID="e1e61c03daf77b8dced0f75a17533c7dac6df6a0c8ab289be8c48251af935857"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.707684 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.730064 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.733425 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.743927 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.751922 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.752009 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.752198 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.752445 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.752681 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.785354 4825 scope.go:117] "RemoveContainer" containerID="6e3c6b624a7630411f15a3d140775a16069c3d682864dab5325a97531902e643"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.819722 4825 scope.go:117] "RemoveContainer" containerID="0ee22263dd2547b6f844ef11ddafe4c6e0d5d6073c7f67bb3962ab75c0091d71"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.852452 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.852503 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.852527 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.853632 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-logs\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.853691 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-config-data\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.853726 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5422c4e-3ec1-4a83-af7c-b89fb013964c-run-httpd\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.853745 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5422c4e-3ec1-4a83-af7c-b89fb013964c-log-httpd\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.853764 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-scripts\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.853797 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.853866 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzkwj\" (UniqueName: \"kubernetes.io/projected/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-kube-api-access-gzkwj\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.853890 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.853913 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.853932 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-config-data\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.854067 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-config-data-custom\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.854091 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-scripts\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.854121 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrtzt\" (UniqueName: \"kubernetes.io/projected/a5422c4e-3ec1-4a83-af7c-b89fb013964c-kube-api-access-jrtzt\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.956027 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-config-data-custom\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0"
Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.956083 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName:
\"kubernetes.io/secret/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-scripts\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.956122 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrtzt\" (UniqueName: \"kubernetes.io/projected/a5422c4e-3ec1-4a83-af7c-b89fb013964c-kube-api-access-jrtzt\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.956166 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.956200 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.956223 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.956243 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-logs\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0" Jan 22 15:45:55 crc 
kubenswrapper[4825]: I0122 15:45:55.956265 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-config-data\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.956301 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5422c4e-3ec1-4a83-af7c-b89fb013964c-run-httpd\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.956325 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5422c4e-3ec1-4a83-af7c-b89fb013964c-log-httpd\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.956346 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-scripts\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.956382 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.956444 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzkwj\" (UniqueName: \"kubernetes.io/projected/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-kube-api-access-gzkwj\") pod 
\"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.956476 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.956503 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.956526 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-config-data\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.966597 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-logs\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.967380 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5422c4e-3ec1-4a83-af7c-b89fb013964c-run-httpd\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.967701 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a5422c4e-3ec1-4a83-af7c-b89fb013964c-log-httpd\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.967754 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.967850 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-scripts\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.979249 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.982299 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.984563 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.985721 4825 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-config-data\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.987481 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-config-data\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.990195 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-scripts\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.990641 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-config-data-custom\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.990912 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " pod="openstack/ceilometer-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.991402 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrtzt\" (UniqueName: \"kubernetes.io/projected/a5422c4e-3ec1-4a83-af7c-b89fb013964c-kube-api-access-jrtzt\") pod \"ceilometer-0\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " 
pod="openstack/ceilometer-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.992430 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0" Jan 22 15:45:55 crc kubenswrapper[4825]: I0122 15:45:55.995454 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzkwj\" (UniqueName: \"kubernetes.io/projected/c8f0065f-b8fd-4a5d-a098-2db018daf9ee-kube-api-access-gzkwj\") pod \"cinder-api-0\" (UID: \"c8f0065f-b8fd-4a5d-a098-2db018daf9ee\") " pod="openstack/cinder-api-0" Jan 22 15:45:56 crc kubenswrapper[4825]: I0122 15:45:56.075332 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:45:56 crc kubenswrapper[4825]: I0122 15:45:56.090086 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 22 15:45:56 crc kubenswrapper[4825]: I0122 15:45:56.520019 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-789468d499-5b789" event={"ID":"562fb5cd-164c-4308-851d-88b6afd1e3c2","Type":"ContainerStarted","Data":"bcc9828559435af02bfa8e0c8eecfc1d7d043c3bb1515fe367b02a96a74068e4"} Jan 22 15:45:56 crc kubenswrapper[4825]: I0122 15:45:56.523149 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:56 crc kubenswrapper[4825]: I0122 15:45:56.523600 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:45:56 crc kubenswrapper[4825]: I0122 15:45:56.581260 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-789468d499-5b789" podStartSLOduration=13.581226089 podStartE2EDuration="13.581226089s" podCreationTimestamp="2026-01-22 15:45:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:45:56.549465755 +0000 UTC m=+1303.310992675" watchObservedRunningTime="2026-01-22 15:45:56.581226089 +0000 UTC m=+1303.342752999" Jan 22 15:45:56 crc kubenswrapper[4825]: I0122 15:45:56.600616 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 15:45:56 crc kubenswrapper[4825]: I0122 15:45:56.659551 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:45:56 crc kubenswrapper[4825]: W0122 15:45:56.769598 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5422c4e_3ec1_4a83_af7c_b89fb013964c.slice/crio-8553efc5da236f7c628ea597faee75a057c7c1418c0fb24ab6acf01cde9e3d15 WatchSource:0}: Error finding container 
8553efc5da236f7c628ea597faee75a057c7c1418c0fb24ab6acf01cde9e3d15: Status 404 returned error can't find the container with id 8553efc5da236f7c628ea597faee75a057c7c1418c0fb24ab6acf01cde9e3d15 Jan 22 15:45:57 crc kubenswrapper[4825]: I0122 15:45:57.528836 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f236d595-13a1-4b4d-a37c-9fe0644907c7" path="/var/lib/kubelet/pods/f236d595-13a1-4b4d-a37c-9fe0644907c7/volumes" Jan 22 15:45:57 crc kubenswrapper[4825]: I0122 15:45:57.529926 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffacb6a6-bce4-41f5-b611-1b0e80970b36" path="/var/lib/kubelet/pods/ffacb6a6-bce4-41f5-b611-1b0e80970b36/volumes" Jan 22 15:45:57 crc kubenswrapper[4825]: I0122 15:45:57.538494 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5422c4e-3ec1-4a83-af7c-b89fb013964c","Type":"ContainerStarted","Data":"8553efc5da236f7c628ea597faee75a057c7c1418c0fb24ab6acf01cde9e3d15"} Jan 22 15:45:57 crc kubenswrapper[4825]: I0122 15:45:57.539912 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8f0065f-b8fd-4a5d-a098-2db018daf9ee","Type":"ContainerStarted","Data":"8bb8ded476a4a7fb9b706ea5b8aa08d6a0bf4b3726fd731d102c787a5ca64667"} Jan 22 15:45:58 crc kubenswrapper[4825]: I0122 15:45:58.555121 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8f0065f-b8fd-4a5d-a098-2db018daf9ee","Type":"ContainerStarted","Data":"78664c9b16eb54b5be2f03132522131b43f4a6c48e574089368968721912148e"} Jan 22 15:45:58 crc kubenswrapper[4825]: I0122 15:45:58.555726 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8f0065f-b8fd-4a5d-a098-2db018daf9ee","Type":"ContainerStarted","Data":"b9b1da04e0601cc6ce77fc4bd3c113122222d94c09aa014004ef382608a3b976"} Jan 22 15:45:58 crc kubenswrapper[4825]: I0122 15:45:58.557260 4825 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"a5422c4e-3ec1-4a83-af7c-b89fb013964c","Type":"ContainerStarted","Data":"96ec70ff70beea89ca867fcdd350f535f490c35abee523dd9d1b42fd260fa263"} Jan 22 15:45:59 crc kubenswrapper[4825]: I0122 15:45:59.569754 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5422c4e-3ec1-4a83-af7c-b89fb013964c","Type":"ContainerStarted","Data":"ada01718aaab54b83c85b19f33d1b4caa194b97aa66d6606c778fd5b2b7bd89e"} Jan 22 15:45:59 crc kubenswrapper[4825]: I0122 15:45:59.570104 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5422c4e-3ec1-4a83-af7c-b89fb013964c","Type":"ContainerStarted","Data":"788c13434b8e652cb0879f2761f468178b377338ec1e960723221bd72313e91a"} Jan 22 15:45:59 crc kubenswrapper[4825]: I0122 15:45:59.570272 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 22 15:45:59 crc kubenswrapper[4825]: I0122 15:45:59.596392 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.596371434 podStartE2EDuration="4.596371434s" podCreationTimestamp="2026-01-22 15:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:45:59.589828218 +0000 UTC m=+1306.351355138" watchObservedRunningTime="2026-01-22 15:45:59.596371434 +0000 UTC m=+1306.357898344" Jan 22 15:46:00 crc kubenswrapper[4825]: I0122 15:46:00.022242 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:46:00 crc kubenswrapper[4825]: I0122 15:46:00.083797 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-789468d499-5b789" Jan 22 15:46:01 crc kubenswrapper[4825]: I0122 15:46:01.530552 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Jan 22 15:46:01 crc kubenswrapper[4825]: I0122 15:46:01.593233 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5422c4e-3ec1-4a83-af7c-b89fb013964c","Type":"ContainerStarted","Data":"48289b8c50cea4167aa399a7aafd100db8026020c62c57120572fcd9825037b5"} Jan 22 15:46:01 crc kubenswrapper[4825]: I0122 15:46:01.593396 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 15:46:01 crc kubenswrapper[4825]: I0122 15:46:01.628711 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.581384707 podStartE2EDuration="6.628683473s" podCreationTimestamp="2026-01-22 15:45:55 +0000 UTC" firstStartedPulling="2026-01-22 15:45:56.774720845 +0000 UTC m=+1303.536247755" lastFinishedPulling="2026-01-22 15:46:00.822019611 +0000 UTC m=+1307.583546521" observedRunningTime="2026-01-22 15:46:01.617261938 +0000 UTC m=+1308.378788848" watchObservedRunningTime="2026-01-22 15:46:01.628683473 +0000 UTC m=+1308.390210383" Jan 22 15:46:02 crc kubenswrapper[4825]: I0122 15:46:02.607682 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerName="ceilometer-central-agent" containerID="cri-o://96ec70ff70beea89ca867fcdd350f535f490c35abee523dd9d1b42fd260fa263" gracePeriod=30 Jan 22 15:46:02 crc kubenswrapper[4825]: I0122 15:46:02.607839 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerName="proxy-httpd" containerID="cri-o://48289b8c50cea4167aa399a7aafd100db8026020c62c57120572fcd9825037b5" gracePeriod=30 Jan 22 15:46:02 crc kubenswrapper[4825]: I0122 15:46:02.607839 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerName="sg-core" containerID="cri-o://ada01718aaab54b83c85b19f33d1b4caa194b97aa66d6606c778fd5b2b7bd89e" gracePeriod=30 Jan 22 15:46:02 crc kubenswrapper[4825]: I0122 15:46:02.607867 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerName="ceilometer-notification-agent" containerID="cri-o://788c13434b8e652cb0879f2761f468178b377338ec1e960723221bd72313e91a" gracePeriod=30 Jan 22 15:46:03 crc kubenswrapper[4825]: I0122 15:46:03.637683 4825 generic.go:334] "Generic (PLEG): container finished" podID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerID="48289b8c50cea4167aa399a7aafd100db8026020c62c57120572fcd9825037b5" exitCode=0 Jan 22 15:46:03 crc kubenswrapper[4825]: I0122 15:46:03.638051 4825 generic.go:334] "Generic (PLEG): container finished" podID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerID="ada01718aaab54b83c85b19f33d1b4caa194b97aa66d6606c778fd5b2b7bd89e" exitCode=2 Jan 22 15:46:03 crc kubenswrapper[4825]: I0122 15:46:03.638068 4825 generic.go:334] "Generic (PLEG): container finished" podID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerID="788c13434b8e652cb0879f2761f468178b377338ec1e960723221bd72313e91a" exitCode=0 Jan 22 15:46:03 crc kubenswrapper[4825]: I0122 15:46:03.637751 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5422c4e-3ec1-4a83-af7c-b89fb013964c","Type":"ContainerDied","Data":"48289b8c50cea4167aa399a7aafd100db8026020c62c57120572fcd9825037b5"} Jan 22 15:46:03 crc kubenswrapper[4825]: I0122 15:46:03.638121 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5422c4e-3ec1-4a83-af7c-b89fb013964c","Type":"ContainerDied","Data":"ada01718aaab54b83c85b19f33d1b4caa194b97aa66d6606c778fd5b2b7bd89e"} Jan 22 15:46:03 crc kubenswrapper[4825]: I0122 15:46:03.638151 4825 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"a5422c4e-3ec1-4a83-af7c-b89fb013964c","Type":"ContainerDied","Data":"788c13434b8e652cb0879f2761f468178b377338ec1e960723221bd72313e91a"} Jan 22 15:46:03 crc kubenswrapper[4825]: I0122 15:46:03.852698 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 15:46:03 crc kubenswrapper[4825]: I0122 15:46:03.853029 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5e86f1cc-a4dd-4f8f-b9b3-18806405875a" containerName="glance-log" containerID="cri-o://8c3241f400a70e7b8917c2a01619d54ee026b3065b3d232b50b0afbece8406e4" gracePeriod=30 Jan 22 15:46:03 crc kubenswrapper[4825]: I0122 15:46:03.853161 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5e86f1cc-a4dd-4f8f-b9b3-18806405875a" containerName="glance-httpd" containerID="cri-o://aca2265505e3d7bf617ccb5f32f2e1e77e1a1fdd3326cfdb46be0e12e5685489" gracePeriod=30 Jan 22 15:46:04 crc kubenswrapper[4825]: I0122 15:46:04.650176 4825 generic.go:334] "Generic (PLEG): container finished" podID="5e86f1cc-a4dd-4f8f-b9b3-18806405875a" containerID="8c3241f400a70e7b8917c2a01619d54ee026b3065b3d232b50b0afbece8406e4" exitCode=143 Jan 22 15:46:04 crc kubenswrapper[4825]: I0122 15:46:04.650265 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5e86f1cc-a4dd-4f8f-b9b3-18806405875a","Type":"ContainerDied","Data":"8c3241f400a70e7b8917c2a01619d54ee026b3065b3d232b50b0afbece8406e4"} Jan 22 15:46:04 crc kubenswrapper[4825]: I0122 15:46:04.937234 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 15:46:04 crc kubenswrapper[4825]: I0122 15:46:04.937497 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="63da9616-1db2-49a5-8591-9c8bdbbb43a3" containerName="glance-log" containerID="cri-o://fe96204d9b101fc9c165f595318955f2f332311a76b05f8e0cbd8b217ee36320" gracePeriod=30 Jan 22 15:46:04 crc kubenswrapper[4825]: I0122 15:46:04.937614 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="63da9616-1db2-49a5-8591-9c8bdbbb43a3" containerName="glance-httpd" containerID="cri-o://ef37cd4c07f479e49fce30a552f1592858bb9c5965d59c1a3c43b769e75c9029" gracePeriod=30 Jan 22 15:46:05 crc kubenswrapper[4825]: I0122 15:46:05.668584 4825 generic.go:334] "Generic (PLEG): container finished" podID="63da9616-1db2-49a5-8591-9c8bdbbb43a3" containerID="fe96204d9b101fc9c165f595318955f2f332311a76b05f8e0cbd8b217ee36320" exitCode=143 Jan 22 15:46:05 crc kubenswrapper[4825]: I0122 15:46:05.668668 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"63da9616-1db2-49a5-8591-9c8bdbbb43a3","Type":"ContainerDied","Data":"fe96204d9b101fc9c165f595318955f2f332311a76b05f8e0cbd8b217ee36320"} Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.010838 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-pxhlf"] Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.012369 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pxhlf" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.045110 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pxhlf"] Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.090300 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-brf2x"] Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.095798 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-brf2x" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.096226 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b51ad21-4de0-4a11-9859-c69b78c5c9fe-operator-scripts\") pod \"nova-api-db-create-pxhlf\" (UID: \"2b51ad21-4de0-4a11-9859-c69b78c5c9fe\") " pod="openstack/nova-api-db-create-pxhlf" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.096265 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb47j\" (UniqueName: \"kubernetes.io/projected/2b51ad21-4de0-4a11-9859-c69b78c5c9fe-kube-api-access-xb47j\") pod \"nova-api-db-create-pxhlf\" (UID: \"2b51ad21-4de0-4a11-9859-c69b78c5c9fe\") " pod="openstack/nova-api-db-create-pxhlf" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.168567 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-brf2x"] Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.205767 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b51ad21-4de0-4a11-9859-c69b78c5c9fe-operator-scripts\") pod \"nova-api-db-create-pxhlf\" (UID: \"2b51ad21-4de0-4a11-9859-c69b78c5c9fe\") " pod="openstack/nova-api-db-create-pxhlf" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.206632 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb47j\" (UniqueName: \"kubernetes.io/projected/2b51ad21-4de0-4a11-9859-c69b78c5c9fe-kube-api-access-xb47j\") pod \"nova-api-db-create-pxhlf\" (UID: \"2b51ad21-4de0-4a11-9859-c69b78c5c9fe\") " pod="openstack/nova-api-db-create-pxhlf" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.206835 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5bsqf\" (UniqueName: \"kubernetes.io/projected/d8b64a93-b139-429e-87fc-28428abaf0f5-kube-api-access-5bsqf\") pod \"nova-cell0-db-create-brf2x\" (UID: \"d8b64a93-b139-429e-87fc-28428abaf0f5\") " pod="openstack/nova-cell0-db-create-brf2x" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.206995 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8b64a93-b139-429e-87fc-28428abaf0f5-operator-scripts\") pod \"nova-cell0-db-create-brf2x\" (UID: \"d8b64a93-b139-429e-87fc-28428abaf0f5\") " pod="openstack/nova-cell0-db-create-brf2x" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.207049 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b51ad21-4de0-4a11-9859-c69b78c5c9fe-operator-scripts\") pod \"nova-api-db-create-pxhlf\" (UID: \"2b51ad21-4de0-4a11-9859-c69b78c5c9fe\") " pod="openstack/nova-api-db-create-pxhlf" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.218838 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-b29e-account-create-update-56crs"] Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.229650 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b29e-account-create-update-56crs" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.232719 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.248799 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb47j\" (UniqueName: \"kubernetes.io/projected/2b51ad21-4de0-4a11-9859-c69b78c5c9fe-kube-api-access-xb47j\") pod \"nova-api-db-create-pxhlf\" (UID: \"2b51ad21-4de0-4a11-9859-c69b78c5c9fe\") " pod="openstack/nova-api-db-create-pxhlf" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.256565 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b29e-account-create-update-56crs"] Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.276770 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-9xbqd"] Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.278637 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9xbqd" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.315682 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b798t\" (UniqueName: \"kubernetes.io/projected/4278500e-8eaf-47d6-a746-d23a33cc2603-kube-api-access-b798t\") pod \"nova-cell1-db-create-9xbqd\" (UID: \"4278500e-8eaf-47d6-a746-d23a33cc2603\") " pod="openstack/nova-cell1-db-create-9xbqd" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.316028 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bsqf\" (UniqueName: \"kubernetes.io/projected/d8b64a93-b139-429e-87fc-28428abaf0f5-kube-api-access-5bsqf\") pod \"nova-cell0-db-create-brf2x\" (UID: \"d8b64a93-b139-429e-87fc-28428abaf0f5\") " pod="openstack/nova-cell0-db-create-brf2x" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.316693 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8b64a93-b139-429e-87fc-28428abaf0f5-operator-scripts\") pod \"nova-cell0-db-create-brf2x\" (UID: \"d8b64a93-b139-429e-87fc-28428abaf0f5\") " pod="openstack/nova-cell0-db-create-brf2x" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.316890 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkcd9\" (UniqueName: \"kubernetes.io/projected/4fc0009b-f413-4c14-829a-e3ffa344de3c-kube-api-access-xkcd9\") pod \"nova-api-b29e-account-create-update-56crs\" (UID: \"4fc0009b-f413-4c14-829a-e3ffa344de3c\") " pod="openstack/nova-api-b29e-account-create-update-56crs" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.317019 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4278500e-8eaf-47d6-a746-d23a33cc2603-operator-scripts\") pod \"nova-cell1-db-create-9xbqd\" (UID: \"4278500e-8eaf-47d6-a746-d23a33cc2603\") " pod="openstack/nova-cell1-db-create-9xbqd" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.319204 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fc0009b-f413-4c14-829a-e3ffa344de3c-operator-scripts\") pod \"nova-api-b29e-account-create-update-56crs\" (UID: \"4fc0009b-f413-4c14-829a-e3ffa344de3c\") " pod="openstack/nova-api-b29e-account-create-update-56crs" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.317647 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8b64a93-b139-429e-87fc-28428abaf0f5-operator-scripts\") pod \"nova-cell0-db-create-brf2x\" (UID: \"d8b64a93-b139-429e-87fc-28428abaf0f5\") " pod="openstack/nova-cell0-db-create-brf2x" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.345714 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bsqf\" (UniqueName: \"kubernetes.io/projected/d8b64a93-b139-429e-87fc-28428abaf0f5-kube-api-access-5bsqf\") pod \"nova-cell0-db-create-brf2x\" (UID: \"d8b64a93-b139-429e-87fc-28428abaf0f5\") " pod="openstack/nova-cell0-db-create-brf2x" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.349706 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9xbqd"] Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.350265 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-pxhlf" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.393037 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-a651-account-create-update-mkmn4"] Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.394800 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-a651-account-create-update-mkmn4" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.397645 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.409344 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-a651-account-create-update-mkmn4"] Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.427586 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkcd9\" (UniqueName: \"kubernetes.io/projected/4fc0009b-f413-4c14-829a-e3ffa344de3c-kube-api-access-xkcd9\") pod \"nova-api-b29e-account-create-update-56crs\" (UID: \"4fc0009b-f413-4c14-829a-e3ffa344de3c\") " pod="openstack/nova-api-b29e-account-create-update-56crs" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.427656 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4278500e-8eaf-47d6-a746-d23a33cc2603-operator-scripts\") pod \"nova-cell1-db-create-9xbqd\" (UID: \"4278500e-8eaf-47d6-a746-d23a33cc2603\") " pod="openstack/nova-cell1-db-create-9xbqd" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.427715 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fc0009b-f413-4c14-829a-e3ffa344de3c-operator-scripts\") pod \"nova-api-b29e-account-create-update-56crs\" (UID: \"4fc0009b-f413-4c14-829a-e3ffa344de3c\") " 
pod="openstack/nova-api-b29e-account-create-update-56crs" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.427769 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/686cc9da-301b-40e3-ae00-165f01c28654-operator-scripts\") pod \"nova-cell0-a651-account-create-update-mkmn4\" (UID: \"686cc9da-301b-40e3-ae00-165f01c28654\") " pod="openstack/nova-cell0-a651-account-create-update-mkmn4" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.427816 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b798t\" (UniqueName: \"kubernetes.io/projected/4278500e-8eaf-47d6-a746-d23a33cc2603-kube-api-access-b798t\") pod \"nova-cell1-db-create-9xbqd\" (UID: \"4278500e-8eaf-47d6-a746-d23a33cc2603\") " pod="openstack/nova-cell1-db-create-9xbqd" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.429907 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4278500e-8eaf-47d6-a746-d23a33cc2603-operator-scripts\") pod \"nova-cell1-db-create-9xbqd\" (UID: \"4278500e-8eaf-47d6-a746-d23a33cc2603\") " pod="openstack/nova-cell1-db-create-9xbqd" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.431927 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fc0009b-f413-4c14-829a-e3ffa344de3c-operator-scripts\") pod \"nova-api-b29e-account-create-update-56crs\" (UID: \"4fc0009b-f413-4c14-829a-e3ffa344de3c\") " pod="openstack/nova-api-b29e-account-create-update-56crs" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.433935 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fbwb\" (UniqueName: \"kubernetes.io/projected/686cc9da-301b-40e3-ae00-165f01c28654-kube-api-access-5fbwb\") pod 
\"nova-cell0-a651-account-create-update-mkmn4\" (UID: \"686cc9da-301b-40e3-ae00-165f01c28654\") " pod="openstack/nova-cell0-a651-account-create-update-mkmn4" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.460376 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b798t\" (UniqueName: \"kubernetes.io/projected/4278500e-8eaf-47d6-a746-d23a33cc2603-kube-api-access-b798t\") pod \"nova-cell1-db-create-9xbqd\" (UID: \"4278500e-8eaf-47d6-a746-d23a33cc2603\") " pod="openstack/nova-cell1-db-create-9xbqd" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.460442 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkcd9\" (UniqueName: \"kubernetes.io/projected/4fc0009b-f413-4c14-829a-e3ffa344de3c-kube-api-access-xkcd9\") pod \"nova-api-b29e-account-create-update-56crs\" (UID: \"4fc0009b-f413-4c14-829a-e3ffa344de3c\") " pod="openstack/nova-api-b29e-account-create-update-56crs" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.460966 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-brf2x" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.532235 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b29e-account-create-update-56crs" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.538400 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/686cc9da-301b-40e3-ae00-165f01c28654-operator-scripts\") pod \"nova-cell0-a651-account-create-update-mkmn4\" (UID: \"686cc9da-301b-40e3-ae00-165f01c28654\") " pod="openstack/nova-cell0-a651-account-create-update-mkmn4" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.538535 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fbwb\" (UniqueName: \"kubernetes.io/projected/686cc9da-301b-40e3-ae00-165f01c28654-kube-api-access-5fbwb\") pod \"nova-cell0-a651-account-create-update-mkmn4\" (UID: \"686cc9da-301b-40e3-ae00-165f01c28654\") " pod="openstack/nova-cell0-a651-account-create-update-mkmn4" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.539713 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/686cc9da-301b-40e3-ae00-165f01c28654-operator-scripts\") pod \"nova-cell0-a651-account-create-update-mkmn4\" (UID: \"686cc9da-301b-40e3-ae00-165f01c28654\") " pod="openstack/nova-cell0-a651-account-create-update-mkmn4" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.546750 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9xbqd" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.568487 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fbwb\" (UniqueName: \"kubernetes.io/projected/686cc9da-301b-40e3-ae00-165f01c28654-kube-api-access-5fbwb\") pod \"nova-cell0-a651-account-create-update-mkmn4\" (UID: \"686cc9da-301b-40e3-ae00-165f01c28654\") " pod="openstack/nova-cell0-a651-account-create-update-mkmn4" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.592150 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.640358 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5422c4e-3ec1-4a83-af7c-b89fb013964c-run-httpd\") pod \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.640491 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-scripts\") pod \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.640624 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5422c4e-3ec1-4a83-af7c-b89fb013964c-log-httpd\") pod \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.640774 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-combined-ca-bundle\") pod 
\"a5422c4e-3ec1-4a83-af7c-b89fb013964c\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.640816 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrtzt\" (UniqueName: \"kubernetes.io/projected/a5422c4e-3ec1-4a83-af7c-b89fb013964c-kube-api-access-jrtzt\") pod \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.640861 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5422c4e-3ec1-4a83-af7c-b89fb013964c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a5422c4e-3ec1-4a83-af7c-b89fb013964c" (UID: "a5422c4e-3ec1-4a83-af7c-b89fb013964c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.640874 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-sg-core-conf-yaml\") pod \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.641165 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-config-data\") pod \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\" (UID: \"a5422c4e-3ec1-4a83-af7c-b89fb013964c\") " Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.641491 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5422c4e-3ec1-4a83-af7c-b89fb013964c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a5422c4e-3ec1-4a83-af7c-b89fb013964c" (UID: "a5422c4e-3ec1-4a83-af7c-b89fb013964c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.643383 4825 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5422c4e-3ec1-4a83-af7c-b89fb013964c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.643415 4825 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5422c4e-3ec1-4a83-af7c-b89fb013964c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.646355 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-a2df-account-create-update-m6twc"] Jan 22 15:46:06 crc kubenswrapper[4825]: E0122 15:46:06.651034 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerName="ceilometer-notification-agent" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.651068 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerName="ceilometer-notification-agent" Jan 22 15:46:06 crc kubenswrapper[4825]: E0122 15:46:06.651088 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerName="sg-core" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.651100 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerName="sg-core" Jan 22 15:46:06 crc kubenswrapper[4825]: E0122 15:46:06.651134 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerName="proxy-httpd" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.651140 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerName="proxy-httpd" Jan 22 15:46:06 crc kubenswrapper[4825]: E0122 
15:46:06.651187 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerName="ceilometer-central-agent" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.651193 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerName="ceilometer-central-agent" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.665352 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerName="ceilometer-central-agent" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.665413 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerName="sg-core" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.665432 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerName="ceilometer-notification-agent" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.665458 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerName="proxy-httpd" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.673286 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-scripts" (OuterVolumeSpecName: "scripts") pod "a5422c4e-3ec1-4a83-af7c-b89fb013964c" (UID: "a5422c4e-3ec1-4a83-af7c-b89fb013964c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.686438 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5422c4e-3ec1-4a83-af7c-b89fb013964c-kube-api-access-jrtzt" (OuterVolumeSpecName: "kube-api-access-jrtzt") pod "a5422c4e-3ec1-4a83-af7c-b89fb013964c" (UID: "a5422c4e-3ec1-4a83-af7c-b89fb013964c"). 
InnerVolumeSpecName "kube-api-access-jrtzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.708385 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a2df-account-create-update-m6twc" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.712774 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.712778 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-a2df-account-create-update-m6twc"] Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.755508 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a5422c4e-3ec1-4a83-af7c-b89fb013964c" (UID: "a5422c4e-3ec1-4a83-af7c-b89fb013964c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.768011 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85ae6c15-5c71-4c86-ac9f-1df49436e099-operator-scripts\") pod \"nova-cell1-a2df-account-create-update-m6twc\" (UID: \"85ae6c15-5c71-4c86-ac9f-1df49436e099\") " pod="openstack/nova-cell1-a2df-account-create-update-m6twc" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.768469 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wshbc\" (UniqueName: \"kubernetes.io/projected/85ae6c15-5c71-4c86-ac9f-1df49436e099-kube-api-access-wshbc\") pod \"nova-cell1-a2df-account-create-update-m6twc\" (UID: \"85ae6c15-5c71-4c86-ac9f-1df49436e099\") " pod="openstack/nova-cell1-a2df-account-create-update-m6twc" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.770045 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrtzt\" (UniqueName: \"kubernetes.io/projected/a5422c4e-3ec1-4a83-af7c-b89fb013964c-kube-api-access-jrtzt\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.770072 4825 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.770083 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.816293 4825 generic.go:334] "Generic (PLEG): container finished" podID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" containerID="96ec70ff70beea89ca867fcdd350f535f490c35abee523dd9d1b42fd260fa263" 
exitCode=0 Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.816343 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5422c4e-3ec1-4a83-af7c-b89fb013964c","Type":"ContainerDied","Data":"96ec70ff70beea89ca867fcdd350f535f490c35abee523dd9d1b42fd260fa263"} Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.816375 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5422c4e-3ec1-4a83-af7c-b89fb013964c","Type":"ContainerDied","Data":"8553efc5da236f7c628ea597faee75a057c7c1418c0fb24ab6acf01cde9e3d15"} Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.816398 4825 scope.go:117] "RemoveContainer" containerID="48289b8c50cea4167aa399a7aafd100db8026020c62c57120572fcd9825037b5" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.816602 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.870542 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a651-account-create-update-mkmn4" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.872529 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85ae6c15-5c71-4c86-ac9f-1df49436e099-operator-scripts\") pod \"nova-cell1-a2df-account-create-update-m6twc\" (UID: \"85ae6c15-5c71-4c86-ac9f-1df49436e099\") " pod="openstack/nova-cell1-a2df-account-create-update-m6twc" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.872581 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wshbc\" (UniqueName: \"kubernetes.io/projected/85ae6c15-5c71-4c86-ac9f-1df49436e099-kube-api-access-wshbc\") pod \"nova-cell1-a2df-account-create-update-m6twc\" (UID: \"85ae6c15-5c71-4c86-ac9f-1df49436e099\") " pod="openstack/nova-cell1-a2df-account-create-update-m6twc" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.873479 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85ae6c15-5c71-4c86-ac9f-1df49436e099-operator-scripts\") pod \"nova-cell1-a2df-account-create-update-m6twc\" (UID: \"85ae6c15-5c71-4c86-ac9f-1df49436e099\") " pod="openstack/nova-cell1-a2df-account-create-update-m6twc" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.915007 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wshbc\" (UniqueName: \"kubernetes.io/projected/85ae6c15-5c71-4c86-ac9f-1df49436e099-kube-api-access-wshbc\") pod \"nova-cell1-a2df-account-create-update-m6twc\" (UID: \"85ae6c15-5c71-4c86-ac9f-1df49436e099\") " pod="openstack/nova-cell1-a2df-account-create-update-m6twc" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.944901 4825 scope.go:117] "RemoveContainer" containerID="ada01718aaab54b83c85b19f33d1b4caa194b97aa66d6606c778fd5b2b7bd89e" Jan 22 15:46:06 crc 
kubenswrapper[4825]: I0122 15:46:06.947728 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5422c4e-3ec1-4a83-af7c-b89fb013964c" (UID: "a5422c4e-3ec1-4a83-af7c-b89fb013964c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:46:06 crc kubenswrapper[4825]: I0122 15:46:06.984672 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.065386 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-config-data" (OuterVolumeSpecName: "config-data") pod "a5422c4e-3ec1-4a83-af7c-b89fb013964c" (UID: "a5422c4e-3ec1-4a83-af7c-b89fb013964c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.095611 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5422c4e-3ec1-4a83-af7c-b89fb013964c-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.112753 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-a2df-account-create-update-m6twc" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.129876 4825 scope.go:117] "RemoveContainer" containerID="788c13434b8e652cb0879f2761f468178b377338ec1e960723221bd72313e91a" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.183838 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.226220 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.238800 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pxhlf"] Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.239411 4825 scope.go:117] "RemoveContainer" containerID="96ec70ff70beea89ca867fcdd350f535f490c35abee523dd9d1b42fd260fa263" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.361074 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.380277 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.390205 4825 scope.go:117] "RemoveContainer" containerID="48289b8c50cea4167aa399a7aafd100db8026020c62c57120572fcd9825037b5" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.390705 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.390925 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.400334 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.402960 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-brf2x"] Jan 22 15:46:07 crc kubenswrapper[4825]: E0122 15:46:07.418177 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48289b8c50cea4167aa399a7aafd100db8026020c62c57120572fcd9825037b5\": container with ID starting with 48289b8c50cea4167aa399a7aafd100db8026020c62c57120572fcd9825037b5 not found: ID does not exist" containerID="48289b8c50cea4167aa399a7aafd100db8026020c62c57120572fcd9825037b5" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.418223 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48289b8c50cea4167aa399a7aafd100db8026020c62c57120572fcd9825037b5"} err="failed to get container status \"48289b8c50cea4167aa399a7aafd100db8026020c62c57120572fcd9825037b5\": rpc error: code = NotFound desc = could not find container \"48289b8c50cea4167aa399a7aafd100db8026020c62c57120572fcd9825037b5\": container with ID starting with 48289b8c50cea4167aa399a7aafd100db8026020c62c57120572fcd9825037b5 not found: ID does not exist" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.418251 4825 
scope.go:117] "RemoveContainer" containerID="ada01718aaab54b83c85b19f33d1b4caa194b97aa66d6606c778fd5b2b7bd89e" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.420902 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.420949 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-scripts\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.420972 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.421021 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0655ea6-5383-4082-9129-1eedbc0f2336-run-httpd\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.421282 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-config-data\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc 
kubenswrapper[4825]: I0122 15:46:07.421313 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0655ea6-5383-4082-9129-1eedbc0f2336-log-httpd\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.421342 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ln2f\" (UniqueName: \"kubernetes.io/projected/a0655ea6-5383-4082-9129-1eedbc0f2336-kube-api-access-2ln2f\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: E0122 15:46:07.422637 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ada01718aaab54b83c85b19f33d1b4caa194b97aa66d6606c778fd5b2b7bd89e\": container with ID starting with ada01718aaab54b83c85b19f33d1b4caa194b97aa66d6606c778fd5b2b7bd89e not found: ID does not exist" containerID="ada01718aaab54b83c85b19f33d1b4caa194b97aa66d6606c778fd5b2b7bd89e" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.422696 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ada01718aaab54b83c85b19f33d1b4caa194b97aa66d6606c778fd5b2b7bd89e"} err="failed to get container status \"ada01718aaab54b83c85b19f33d1b4caa194b97aa66d6606c778fd5b2b7bd89e\": rpc error: code = NotFound desc = could not find container \"ada01718aaab54b83c85b19f33d1b4caa194b97aa66d6606c778fd5b2b7bd89e\": container with ID starting with ada01718aaab54b83c85b19f33d1b4caa194b97aa66d6606c778fd5b2b7bd89e not found: ID does not exist" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.422725 4825 scope.go:117] "RemoveContainer" containerID="788c13434b8e652cb0879f2761f468178b377338ec1e960723221bd72313e91a" Jan 22 15:46:07 crc 
kubenswrapper[4825]: E0122 15:46:07.424180 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"788c13434b8e652cb0879f2761f468178b377338ec1e960723221bd72313e91a\": container with ID starting with 788c13434b8e652cb0879f2761f468178b377338ec1e960723221bd72313e91a not found: ID does not exist" containerID="788c13434b8e652cb0879f2761f468178b377338ec1e960723221bd72313e91a" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.424204 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"788c13434b8e652cb0879f2761f468178b377338ec1e960723221bd72313e91a"} err="failed to get container status \"788c13434b8e652cb0879f2761f468178b377338ec1e960723221bd72313e91a\": rpc error: code = NotFound desc = could not find container \"788c13434b8e652cb0879f2761f468178b377338ec1e960723221bd72313e91a\": container with ID starting with 788c13434b8e652cb0879f2761f468178b377338ec1e960723221bd72313e91a not found: ID does not exist" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.424229 4825 scope.go:117] "RemoveContainer" containerID="96ec70ff70beea89ca867fcdd350f535f490c35abee523dd9d1b42fd260fa263" Jan 22 15:46:07 crc kubenswrapper[4825]: E0122 15:46:07.427587 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96ec70ff70beea89ca867fcdd350f535f490c35abee523dd9d1b42fd260fa263\": container with ID starting with 96ec70ff70beea89ca867fcdd350f535f490c35abee523dd9d1b42fd260fa263 not found: ID does not exist" containerID="96ec70ff70beea89ca867fcdd350f535f490c35abee523dd9d1b42fd260fa263" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.427613 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96ec70ff70beea89ca867fcdd350f535f490c35abee523dd9d1b42fd260fa263"} err="failed to get container status 
\"96ec70ff70beea89ca867fcdd350f535f490c35abee523dd9d1b42fd260fa263\": rpc error: code = NotFound desc = could not find container \"96ec70ff70beea89ca867fcdd350f535f490c35abee523dd9d1b42fd260fa263\": container with ID starting with 96ec70ff70beea89ca867fcdd350f535f490c35abee523dd9d1b42fd260fa263 not found: ID does not exist" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.523009 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-config-data\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.523070 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0655ea6-5383-4082-9129-1eedbc0f2336-log-httpd\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.523102 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ln2f\" (UniqueName: \"kubernetes.io/projected/a0655ea6-5383-4082-9129-1eedbc0f2336-kube-api-access-2ln2f\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.523130 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.523152 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-scripts\") pod 
\"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.523171 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.523209 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0655ea6-5383-4082-9129-1eedbc0f2336-run-httpd\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.523784 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0655ea6-5383-4082-9129-1eedbc0f2336-run-httpd\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.528484 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0655ea6-5383-4082-9129-1eedbc0f2336-log-httpd\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.536286 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-config-data\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.536962 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.546896 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-scripts\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.558194 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.574121 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ln2f\" (UniqueName: \"kubernetes.io/projected/a0655ea6-5383-4082-9129-1eedbc0f2336-kube-api-access-2ln2f\") pod \"ceilometer-0\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.654774 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5422c4e-3ec1-4a83-af7c-b89fb013964c" path="/var/lib/kubelet/pods/a5422c4e-3ec1-4a83-af7c-b89fb013964c/volumes" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.721140 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9xbqd"] Jan 22 15:46:07 crc kubenswrapper[4825]: W0122 15:46:07.735538 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fc0009b_f413_4c14_829a_e3ffa344de3c.slice/crio-3afb1db8ef42b5ba472b570776487c3651fb21bffa227d97c7f83cd4fd0251de WatchSource:0}: Error finding container 
3afb1db8ef42b5ba472b570776487c3651fb21bffa227d97c7f83cd4fd0251de: Status 404 returned error can't find the container with id 3afb1db8ef42b5ba472b570776487c3651fb21bffa227d97c7f83cd4fd0251de Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.749048 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b29e-account-create-update-56crs"] Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.810046 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.898212 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-brf2x" event={"ID":"d8b64a93-b139-429e-87fc-28428abaf0f5","Type":"ContainerStarted","Data":"afbfbad4059a0203800b720b3e539278b8f89e38ca57d35e6d1d730608f53647"} Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.961238 4825 generic.go:334] "Generic (PLEG): container finished" podID="5e86f1cc-a4dd-4f8f-b9b3-18806405875a" containerID="aca2265505e3d7bf617ccb5f32f2e1e77e1a1fdd3326cfdb46be0e12e5685489" exitCode=0 Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.961325 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5e86f1cc-a4dd-4f8f-b9b3-18806405875a","Type":"ContainerDied","Data":"aca2265505e3d7bf617ccb5f32f2e1e77e1a1fdd3326cfdb46be0e12e5685489"} Jan 22 15:46:07 crc kubenswrapper[4825]: I0122 15:46:07.977953 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pxhlf" event={"ID":"2b51ad21-4de0-4a11-9859-c69b78c5c9fe","Type":"ContainerStarted","Data":"69925b76e40a1630112558bc381959f839a22fb4e03dcc14a0b2a5214c23fa0e"} Jan 22 15:46:08 crc kubenswrapper[4825]: I0122 15:46:08.008834 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-a651-account-create-update-mkmn4"] Jan 22 15:46:08 crc kubenswrapper[4825]: I0122 15:46:08.024755 4825 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9xbqd" event={"ID":"4278500e-8eaf-47d6-a746-d23a33cc2603","Type":"ContainerStarted","Data":"3680c3eec5934d792bb74507523064df957f9f85f615cf990a1a4778c9c84ecc"} Jan 22 15:46:08 crc kubenswrapper[4825]: I0122 15:46:08.045691 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-a2df-account-create-update-m6twc"] Jan 22 15:46:08 crc kubenswrapper[4825]: I0122 15:46:08.071526 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b29e-account-create-update-56crs" event={"ID":"4fc0009b-f413-4c14-829a-e3ffa344de3c","Type":"ContainerStarted","Data":"3afb1db8ef42b5ba472b570776487c3651fb21bffa227d97c7f83cd4fd0251de"} Jan 22 15:46:08 crc kubenswrapper[4825]: W0122 15:46:08.130954 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85ae6c15_5c71_4c86_ac9f_1df49436e099.slice/crio-0f351efb4e6844458a45eaa7c3bc2a6cd6871f6126714047552ca6b9a7dc63ea WatchSource:0}: Error finding container 0f351efb4e6844458a45eaa7c3bc2a6cd6871f6126714047552ca6b9a7dc63ea: Status 404 returned error can't find the container with id 0f351efb4e6844458a45eaa7c3bc2a6cd6871f6126714047552ca6b9a7dc63ea Jan 22 15:46:08 crc kubenswrapper[4825]: I0122 15:46:08.998447 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.120862 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pxhlf" event={"ID":"2b51ad21-4de0-4a11-9859-c69b78c5c9fe","Type":"ContainerStarted","Data":"824b7b6567052d092268994afb7b5d5cf841f62c83005488341eb14624cebce8"} Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.142798 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a2df-account-create-update-m6twc" 
event={"ID":"85ae6c15-5c71-4c86-ac9f-1df49436e099","Type":"ContainerStarted","Data":"0f351efb4e6844458a45eaa7c3bc2a6cd6871f6126714047552ca6b9a7dc63ea"} Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.156907 4825 generic.go:334] "Generic (PLEG): container finished" podID="63da9616-1db2-49a5-8591-9c8bdbbb43a3" containerID="ef37cd4c07f479e49fce30a552f1592858bb9c5965d59c1a3c43b769e75c9029" exitCode=0 Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.160348 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"63da9616-1db2-49a5-8591-9c8bdbbb43a3","Type":"ContainerDied","Data":"ef37cd4c07f479e49fce30a552f1592858bb9c5965d59c1a3c43b769e75c9029"} Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.161435 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-pxhlf" podStartSLOduration=4.161419757 podStartE2EDuration="4.161419757s" podCreationTimestamp="2026-01-22 15:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:46:09.156139236 +0000 UTC m=+1315.917666146" watchObservedRunningTime="2026-01-22 15:46:09.161419757 +0000 UTC m=+1315.922946667" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.166299 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a651-account-create-update-mkmn4" event={"ID":"686cc9da-301b-40e3-ae00-165f01c28654","Type":"ContainerStarted","Data":"084a0ce81c8438d855832ff4d6986ac1df30545a5f5cd7df7a52b302e71f522c"} Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.168513 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4709fedd-37c2-4afa-b34d-347f46586c55","Type":"ContainerStarted","Data":"319dfb27e1e2201da1cb99e8f4e28700c32346b823afea9527a9e4a6ed927da1"} Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.193694 4825 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-brf2x" event={"ID":"d8b64a93-b139-429e-87fc-28428abaf0f5","Type":"ContainerStarted","Data":"f4af1435c38998a58198434b0dd1bd9a82c869f68719dc9fc4ae0fd2dd2e9cf9"} Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.216218 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0655ea6-5383-4082-9129-1eedbc0f2336","Type":"ContainerStarted","Data":"f2f510a2c45090e3472cbe7f7665fbff94ee07a138383e369bcde85e437ef794"} Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.224529 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.258748794 podStartE2EDuration="40.224505082s" podCreationTimestamp="2026-01-22 15:45:29 +0000 UTC" firstStartedPulling="2026-01-22 15:45:30.614130537 +0000 UTC m=+1277.375657447" lastFinishedPulling="2026-01-22 15:46:07.579886825 +0000 UTC m=+1314.341413735" observedRunningTime="2026-01-22 15:46:09.214664762 +0000 UTC m=+1315.976191672" watchObservedRunningTime="2026-01-22 15:46:09.224505082 +0000 UTC m=+1315.986031992" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.271857 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-brf2x" podStartSLOduration=3.271834088 podStartE2EDuration="3.271834088s" podCreationTimestamp="2026-01-22 15:46:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:46:09.255400941 +0000 UTC m=+1316.016927851" watchObservedRunningTime="2026-01-22 15:46:09.271834088 +0000 UTC m=+1316.033360998" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.604281 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.755776 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63da9616-1db2-49a5-8591-9c8bdbbb43a3-logs\") pod \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.756121 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-scripts\") pod \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.756159 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-internal-tls-certs\") pod \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.756256 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-config-data\") pod \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.756295 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-combined-ca-bundle\") pod \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.756350 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qnjb\" (UniqueName: 
\"kubernetes.io/projected/63da9616-1db2-49a5-8591-9c8bdbbb43a3-kube-api-access-6qnjb\") pod \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.756446 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63da9616-1db2-49a5-8591-9c8bdbbb43a3-logs" (OuterVolumeSpecName: "logs") pod "63da9616-1db2-49a5-8591-9c8bdbbb43a3" (UID: "63da9616-1db2-49a5-8591-9c8bdbbb43a3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.756559 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63da9616-1db2-49a5-8591-9c8bdbbb43a3-httpd-run\") pod \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.756757 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\") pod \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\" (UID: \"63da9616-1db2-49a5-8591-9c8bdbbb43a3\") " Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.757457 4825 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63da9616-1db2-49a5-8591-9c8bdbbb43a3-logs\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.760673 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-scripts" (OuterVolumeSpecName: "scripts") pod "63da9616-1db2-49a5-8591-9c8bdbbb43a3" (UID: "63da9616-1db2-49a5-8591-9c8bdbbb43a3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.762064 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63da9616-1db2-49a5-8591-9c8bdbbb43a3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "63da9616-1db2-49a5-8591-9c8bdbbb43a3" (UID: "63da9616-1db2-49a5-8591-9c8bdbbb43a3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.803960 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63da9616-1db2-49a5-8591-9c8bdbbb43a3-kube-api-access-6qnjb" (OuterVolumeSpecName: "kube-api-access-6qnjb") pod "63da9616-1db2-49a5-8591-9c8bdbbb43a3" (UID: "63da9616-1db2-49a5-8591-9c8bdbbb43a3"). InnerVolumeSpecName "kube-api-access-6qnjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.813489 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e" (OuterVolumeSpecName: "glance") pod "63da9616-1db2-49a5-8591-9c8bdbbb43a3" (UID: "63da9616-1db2-49a5-8591-9c8bdbbb43a3"). InnerVolumeSpecName "pvc-4e49a725-3f57-44a5-bfa8-df35534f326e". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.864715 4825 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63da9616-1db2-49a5-8591-9c8bdbbb43a3-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.864809 4825 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\") on node \"crc\" " Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.864825 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.864836 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qnjb\" (UniqueName: \"kubernetes.io/projected/63da9616-1db2-49a5-8591-9c8bdbbb43a3-kube-api-access-6qnjb\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.867915 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "63da9616-1db2-49a5-8591-9c8bdbbb43a3" (UID: "63da9616-1db2-49a5-8591-9c8bdbbb43a3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.923266 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63da9616-1db2-49a5-8591-9c8bdbbb43a3" (UID: "63da9616-1db2-49a5-8591-9c8bdbbb43a3"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.946487 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-config-data" (OuterVolumeSpecName: "config-data") pod "63da9616-1db2-49a5-8591-9c8bdbbb43a3" (UID: "63da9616-1db2-49a5-8591-9c8bdbbb43a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.960289 4825 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.960454 4825 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4e49a725-3f57-44a5-bfa8-df35534f326e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e") on node "crc" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.967275 4825 reconciler_common.go:293] "Volume detached for volume \"pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.967313 4825 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.967324 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:09 crc kubenswrapper[4825]: I0122 15:46:09.967332 4825 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63da9616-1db2-49a5-8591-9c8bdbbb43a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.023059 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.170827 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-combined-ca-bundle\") pod \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.171213 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-logs\") pod \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.171296 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-scripts\") pod \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.171322 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-httpd-run\") pod \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") " Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.171540 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\") pod 
\"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") "
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.171693 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmgwb\" (UniqueName: \"kubernetes.io/projected/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-kube-api-access-qmgwb\") pod \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") "
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.171740 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-config-data\") pod \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") "
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.171808 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-public-tls-certs\") pod \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\" (UID: \"5e86f1cc-a4dd-4f8f-b9b3-18806405875a\") "
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.172012 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-logs" (OuterVolumeSpecName: "logs") pod "5e86f1cc-a4dd-4f8f-b9b3-18806405875a" (UID: "5e86f1cc-a4dd-4f8f-b9b3-18806405875a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.172292 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5e86f1cc-a4dd-4f8f-b9b3-18806405875a" (UID: "5e86f1cc-a4dd-4f8f-b9b3-18806405875a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.172585 4825 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-logs\") on node \"crc\" DevicePath \"\""
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.172614 4825 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.191014 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-kube-api-access-qmgwb" (OuterVolumeSpecName: "kube-api-access-qmgwb") pod "5e86f1cc-a4dd-4f8f-b9b3-18806405875a" (UID: "5e86f1cc-a4dd-4f8f-b9b3-18806405875a"). InnerVolumeSpecName "kube-api-access-qmgwb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.203732 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-scripts" (OuterVolumeSpecName: "scripts") pod "5e86f1cc-a4dd-4f8f-b9b3-18806405875a" (UID: "5e86f1cc-a4dd-4f8f-b9b3-18806405875a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.262246 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b29e-account-create-update-56crs" event={"ID":"4fc0009b-f413-4c14-829a-e3ffa344de3c","Type":"ContainerStarted","Data":"4de9cf3ef4f101bd4783b164914d09f56db733c890e5b78d4da7a509dab72f61"}
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.281250 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmgwb\" (UniqueName: \"kubernetes.io/projected/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-kube-api-access-qmgwb\") on node \"crc\" DevicePath \"\""
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.281395 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.294348 4825 generic.go:334] "Generic (PLEG): container finished" podID="d8b64a93-b139-429e-87fc-28428abaf0f5" containerID="f4af1435c38998a58198434b0dd1bd9a82c869f68719dc9fc4ae0fd2dd2e9cf9" exitCode=0
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.294453 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-brf2x" event={"ID":"d8b64a93-b139-429e-87fc-28428abaf0f5","Type":"ContainerDied","Data":"f4af1435c38998a58198434b0dd1bd9a82c869f68719dc9fc4ae0fd2dd2e9cf9"}
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.299720 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-b29e-account-create-update-56crs" podStartSLOduration=4.299699017 podStartE2EDuration="4.299699017s" podCreationTimestamp="2026-01-22 15:46:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:46:10.284066802 +0000 UTC m=+1317.045593712" watchObservedRunningTime="2026-01-22 15:46:10.299699017 +0000 UTC m=+1317.061225927"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.312438 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5e86f1cc-a4dd-4f8f-b9b3-18806405875a","Type":"ContainerDied","Data":"ce5e8a4589d1dccbbff53e439a2daadb44bd9b7499dab0d133cbd995fd47ef27"}
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.312488 4825 scope.go:117] "RemoveContainer" containerID="aca2265505e3d7bf617ccb5f32f2e1e77e1a1fdd3326cfdb46be0e12e5685489"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.312617 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.345514 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-api-0" podUID="9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.202:8889/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.345780 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cloudkitty-api-0" podUID="9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.202:8889/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.348054 4825 scope.go:117] "RemoveContainer" containerID="8c3241f400a70e7b8917c2a01619d54ee026b3065b3d232b50b0afbece8406e4"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.348366 4825 generic.go:334] "Generic (PLEG): container finished" podID="2b51ad21-4de0-4a11-9859-c69b78c5c9fe" containerID="824b7b6567052d092268994afb7b5d5cf841f62c83005488341eb14624cebce8" exitCode=0
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.348437 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pxhlf" event={"ID":"2b51ad21-4de0-4a11-9859-c69b78c5c9fe","Type":"ContainerDied","Data":"824b7b6567052d092268994afb7b5d5cf841f62c83005488341eb14624cebce8"}
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.348870 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.386169 4825 generic.go:334] "Generic (PLEG): container finished" podID="4278500e-8eaf-47d6-a746-d23a33cc2603" containerID="0dac02d39650005d6d6b79813baff509f0f3238c0c0d13242237a506fd455b95" exitCode=0
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.386320 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9xbqd" event={"ID":"4278500e-8eaf-47d6-a746-d23a33cc2603","Type":"ContainerDied","Data":"0dac02d39650005d6d6b79813baff509f0f3238c0c0d13242237a506fd455b95"}
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.437201 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"63da9616-1db2-49a5-8591-9c8bdbbb43a3","Type":"ContainerDied","Data":"c7d077adda3a5a2948817335b239051015d20878806e0678989c395a0369bbaa"}
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.437292 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.442185 4825 scope.go:117] "RemoveContainer" containerID="ef37cd4c07f479e49fce30a552f1592858bb9c5965d59c1a3c43b769e75c9029"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.469790 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-a2df-account-create-update-m6twc" podStartSLOduration=4.469771846 podStartE2EDuration="4.469771846s" podCreationTimestamp="2026-01-22 15:46:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:46:10.468397357 +0000 UTC m=+1317.229924267" watchObservedRunningTime="2026-01-22 15:46:10.469771846 +0000 UTC m=+1317.231298756"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.553262 4825 scope.go:117] "RemoveContainer" containerID="fe96204d9b101fc9c165f595318955f2f332311a76b05f8e0cbd8b217ee36320"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.659291 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.736669 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.797112 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 22 15:46:10 crc kubenswrapper[4825]: E0122 15:46:10.798638 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e86f1cc-a4dd-4f8f-b9b3-18806405875a" containerName="glance-httpd"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.798669 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e86f1cc-a4dd-4f8f-b9b3-18806405875a" containerName="glance-httpd"
Jan 22 15:46:10 crc kubenswrapper[4825]: E0122 15:46:10.798730 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63da9616-1db2-49a5-8591-9c8bdbbb43a3" containerName="glance-log"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.798740 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="63da9616-1db2-49a5-8591-9c8bdbbb43a3" containerName="glance-log"
Jan 22 15:46:10 crc kubenswrapper[4825]: E0122 15:46:10.798771 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e86f1cc-a4dd-4f8f-b9b3-18806405875a" containerName="glance-log"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.798781 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e86f1cc-a4dd-4f8f-b9b3-18806405875a" containerName="glance-log"
Jan 22 15:46:10 crc kubenswrapper[4825]: E0122 15:46:10.798820 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63da9616-1db2-49a5-8591-9c8bdbbb43a3" containerName="glance-httpd"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.798830 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="63da9616-1db2-49a5-8591-9c8bdbbb43a3" containerName="glance-httpd"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.799503 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="63da9616-1db2-49a5-8591-9c8bdbbb43a3" containerName="glance-log"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.799540 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="63da9616-1db2-49a5-8591-9c8bdbbb43a3" containerName="glance-httpd"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.799575 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e86f1cc-a4dd-4f8f-b9b3-18806405875a" containerName="glance-httpd"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.799613 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e86f1cc-a4dd-4f8f-b9b3-18806405875a" containerName="glance-log"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.803676 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.813079 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95" (OuterVolumeSpecName: "glance") pod "5e86f1cc-a4dd-4f8f-b9b3-18806405875a" (UID: "5e86f1cc-a4dd-4f8f-b9b3-18806405875a"). InnerVolumeSpecName "pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.813386 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.813460 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.870590 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.871861 4825 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\") on node \"crc\" "
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.888439 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e86f1cc-a4dd-4f8f-b9b3-18806405875a" (UID: "5e86f1cc-a4dd-4f8f-b9b3-18806405875a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.985607 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a15af5-89b0-4b01-818e-318d7930e7cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.985654 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21a15af5-89b0-4b01-818e-318d7930e7cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.985757 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/21a15af5-89b0-4b01-818e-318d7930e7cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.985853 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21a15af5-89b0-4b01-818e-318d7930e7cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.985883 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjz6v\" (UniqueName: \"kubernetes.io/projected/21a15af5-89b0-4b01-818e-318d7930e7cf-kube-api-access-tjz6v\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.986142 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21a15af5-89b0-4b01-818e-318d7930e7cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.986169 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21a15af5-89b0-4b01-818e-318d7930e7cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.986256 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.986387 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.995056 4825 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 22 15:46:10 crc kubenswrapper[4825]: I0122 15:46:10.995216 4825 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95") on node "crc"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.028821 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5e86f1cc-a4dd-4f8f-b9b3-18806405875a" (UID: "5e86f1cc-a4dd-4f8f-b9b3-18806405875a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.088474 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/21a15af5-89b0-4b01-818e-318d7930e7cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.088585 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21a15af5-89b0-4b01-818e-318d7930e7cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.088620 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjz6v\" (UniqueName: \"kubernetes.io/projected/21a15af5-89b0-4b01-818e-318d7930e7cf-kube-api-access-tjz6v\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.088737 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21a15af5-89b0-4b01-818e-318d7930e7cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.088768 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21a15af5-89b0-4b01-818e-318d7930e7cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.088826 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.088911 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a15af5-89b0-4b01-818e-318d7930e7cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.088945 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21a15af5-89b0-4b01-818e-318d7930e7cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.089087 4825 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.089113 4825 reconciler_common.go:293] "Volume detached for volume \"pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\") on node \"crc\" DevicePath \"\""
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.089266 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/21a15af5-89b0-4b01-818e-318d7930e7cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.094867 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21a15af5-89b0-4b01-818e-318d7930e7cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.098499 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21a15af5-89b0-4b01-818e-318d7930e7cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.100250 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.101228 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7722be3f00b9a9940eea4c247f06e25a83500c17d2f465a46607559e6e786615/globalmount\"" pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.101897 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21a15af5-89b0-4b01-818e-318d7930e7cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.103558 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a15af5-89b0-4b01-818e-318d7930e7cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.104496 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21a15af5-89b0-4b01-818e-318d7930e7cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.108299 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-config-data" (OuterVolumeSpecName: "config-data") pod "5e86f1cc-a4dd-4f8f-b9b3-18806405875a" (UID: "5e86f1cc-a4dd-4f8f-b9b3-18806405875a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.126099 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjz6v\" (UniqueName: \"kubernetes.io/projected/21a15af5-89b0-4b01-818e-318d7930e7cf-kube-api-access-tjz6v\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.191258 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e86f1cc-a4dd-4f8f-b9b3-18806405875a-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.208261 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e49a725-3f57-44a5-bfa8-df35534f326e\") pod \"glance-default-internal-api-0\" (UID: \"21a15af5-89b0-4b01-818e-318d7930e7cf\") " pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.243666 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.334307 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.354689 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.372103 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.374051 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.382659 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.382934 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.399463 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.468219 4825 generic.go:334] "Generic (PLEG): container finished" podID="4fc0009b-f413-4c14-829a-e3ffa344de3c" containerID="4de9cf3ef4f101bd4783b164914d09f56db733c890e5b78d4da7a509dab72f61" exitCode=0
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.468307 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b29e-account-create-update-56crs" event={"ID":"4fc0009b-f413-4c14-829a-e3ffa344de3c","Type":"ContainerDied","Data":"4de9cf3ef4f101bd4783b164914d09f56db733c890e5b78d4da7a509dab72f61"}
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.489751 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0655ea6-5383-4082-9129-1eedbc0f2336","Type":"ContainerStarted","Data":"2539e5f70d32e284c60e666842e9a5fc27657f9405215a009e723c1c7bcef665"}
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.489804 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0655ea6-5383-4082-9129-1eedbc0f2336","Type":"ContainerStarted","Data":"881ad51114925b82d3e2b575b5eac6570265bbe6ca88e75e9258c0acacbd9610"}
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.499385 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa216742-9142-43e8-a320-47c91f44da7e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.499579 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa216742-9142-43e8-a320-47c91f44da7e-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.499655 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa216742-9142-43e8-a320-47c91f44da7e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.500187 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa216742-9142-43e8-a320-47c91f44da7e-logs\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.500277 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa216742-9142-43e8-a320-47c91f44da7e-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.500344 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4479c\" (UniqueName: \"kubernetes.io/projected/fa216742-9142-43e8-a320-47c91f44da7e-kube-api-access-4479c\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.500493 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.500534 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa216742-9142-43e8-a320-47c91f44da7e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.525097 4825 generic.go:334] "Generic (PLEG): container finished" podID="85ae6c15-5c71-4c86-ac9f-1df49436e099" containerID="204018252ace59351294f08b4034ade05e14d1d8787d3fb1ef31951838542bef" exitCode=0
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.619860 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.619910 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa216742-9142-43e8-a320-47c91f44da7e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.619965 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa216742-9142-43e8-a320-47c91f44da7e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.620032 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa216742-9142-43e8-a320-47c91f44da7e-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.620051 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa216742-9142-43e8-a320-47c91f44da7e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.620120 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa216742-9142-43e8-a320-47c91f44da7e-logs\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.620147 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa216742-9142-43e8-a320-47c91f44da7e-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.620172 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4479c\" (UniqueName: \"kubernetes.io/projected/fa216742-9142-43e8-a320-47c91f44da7e-kube-api-access-4479c\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.634741 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa216742-9142-43e8-a320-47c91f44da7e-logs\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.635058 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa216742-9142-43e8-a320-47c91f44da7e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.636026 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa216742-9142-43e8-a320-47c91f44da7e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.656536 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa216742-9142-43e8-a320-47c91f44da7e-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.657101 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.657137 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2eba76fcf8acb10fcd1d5de55fcc46feaa499f2cf7d93b353c025f405bcc2f19/globalmount\"" pod="openstack/glance-default-external-api-0"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.657790 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e86f1cc-a4dd-4f8f-b9b3-18806405875a" path="/var/lib/kubelet/pods/5e86f1cc-a4dd-4f8f-b9b3-18806405875a/volumes"
Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.658799 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63da9616-1db2-49a5-8591-9c8bdbbb43a3"
path="/var/lib/kubelet/pods/63da9616-1db2-49a5-8591-9c8bdbbb43a3/volumes" Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.663240 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a2df-account-create-update-m6twc" event={"ID":"85ae6c15-5c71-4c86-ac9f-1df49436e099","Type":"ContainerDied","Data":"204018252ace59351294f08b4034ade05e14d1d8787d3fb1ef31951838542bef"} Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.663287 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a651-account-create-update-mkmn4" event={"ID":"686cc9da-301b-40e3-ae00-165f01c28654","Type":"ContainerStarted","Data":"1047a606b1ee037b80d24b9ee14842b9a940b706b46850845f6d31e8cb357cea"} Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.667318 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa216742-9142-43e8-a320-47c91f44da7e-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0" Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.667902 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa216742-9142-43e8-a320-47c91f44da7e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0" Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.683740 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4479c\" (UniqueName: \"kubernetes.io/projected/fa216742-9142-43e8-a320-47c91f44da7e-kube-api-access-4479c\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0" Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.763103 4825 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95277569-3c13-44b9-b1e5-2eacbcc2df95\") pod \"glance-default-external-api-0\" (UID: \"fa216742-9142-43e8-a320-47c91f44da7e\") " pod="openstack/glance-default-external-api-0" Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.821966 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:46:11 crc kubenswrapper[4825]: I0122 15:46:11.848969 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.430855 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-brf2x" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.492544 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.521813 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-pxhlf" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.558676 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bsqf\" (UniqueName: \"kubernetes.io/projected/d8b64a93-b139-429e-87fc-28428abaf0f5-kube-api-access-5bsqf\") pod \"d8b64a93-b139-429e-87fc-28428abaf0f5\" (UID: \"d8b64a93-b139-429e-87fc-28428abaf0f5\") " Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.558729 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8b64a93-b139-429e-87fc-28428abaf0f5-operator-scripts\") pod \"d8b64a93-b139-429e-87fc-28428abaf0f5\" (UID: \"d8b64a93-b139-429e-87fc-28428abaf0f5\") " Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.559954 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b64a93-b139-429e-87fc-28428abaf0f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8b64a93-b139-429e-87fc-28428abaf0f5" (UID: "d8b64a93-b139-429e-87fc-28428abaf0f5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.567256 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8b64a93-b139-429e-87fc-28428abaf0f5-kube-api-access-5bsqf" (OuterVolumeSpecName: "kube-api-access-5bsqf") pod "d8b64a93-b139-429e-87fc-28428abaf0f5" (UID: "d8b64a93-b139-429e-87fc-28428abaf0f5"). InnerVolumeSpecName "kube-api-access-5bsqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.613784 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9xbqd" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.641813 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pxhlf" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.641801 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pxhlf" event={"ID":"2b51ad21-4de0-4a11-9859-c69b78c5c9fe","Type":"ContainerDied","Data":"69925b76e40a1630112558bc381959f839a22fb4e03dcc14a0b2a5214c23fa0e"} Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.643376 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69925b76e40a1630112558bc381959f839a22fb4e03dcc14a0b2a5214c23fa0e" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.717310 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb47j\" (UniqueName: \"kubernetes.io/projected/2b51ad21-4de0-4a11-9859-c69b78c5c9fe-kube-api-access-xb47j\") pod \"2b51ad21-4de0-4a11-9859-c69b78c5c9fe\" (UID: \"2b51ad21-4de0-4a11-9859-c69b78c5c9fe\") " Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.717831 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4278500e-8eaf-47d6-a746-d23a33cc2603-operator-scripts\") pod \"4278500e-8eaf-47d6-a746-d23a33cc2603\" (UID: \"4278500e-8eaf-47d6-a746-d23a33cc2603\") " Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.718101 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b798t\" (UniqueName: \"kubernetes.io/projected/4278500e-8eaf-47d6-a746-d23a33cc2603-kube-api-access-b798t\") pod \"4278500e-8eaf-47d6-a746-d23a33cc2603\" (UID: \"4278500e-8eaf-47d6-a746-d23a33cc2603\") " Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.718245 4825 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b51ad21-4de0-4a11-9859-c69b78c5c9fe-operator-scripts\") pod \"2b51ad21-4de0-4a11-9859-c69b78c5c9fe\" (UID: \"2b51ad21-4de0-4a11-9859-c69b78c5c9fe\") " Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.719546 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bsqf\" (UniqueName: \"kubernetes.io/projected/d8b64a93-b139-429e-87fc-28428abaf0f5-kube-api-access-5bsqf\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.719565 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8b64a93-b139-429e-87fc-28428abaf0f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.722305 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b51ad21-4de0-4a11-9859-c69b78c5c9fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b51ad21-4de0-4a11-9859-c69b78c5c9fe" (UID: "2b51ad21-4de0-4a11-9859-c69b78c5c9fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.722898 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4278500e-8eaf-47d6-a746-d23a33cc2603-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4278500e-8eaf-47d6-a746-d23a33cc2603" (UID: "4278500e-8eaf-47d6-a746-d23a33cc2603"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.738222 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b51ad21-4de0-4a11-9859-c69b78c5c9fe-kube-api-access-xb47j" (OuterVolumeSpecName: "kube-api-access-xb47j") pod "2b51ad21-4de0-4a11-9859-c69b78c5c9fe" (UID: "2b51ad21-4de0-4a11-9859-c69b78c5c9fe"). InnerVolumeSpecName "kube-api-access-xb47j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.738456 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9xbqd" event={"ID":"4278500e-8eaf-47d6-a746-d23a33cc2603","Type":"ContainerDied","Data":"3680c3eec5934d792bb74507523064df957f9f85f615cf990a1a4778c9c84ecc"} Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.738484 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3680c3eec5934d792bb74507523064df957f9f85f615cf990a1a4778c9c84ecc" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.738584 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9xbqd" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.746812 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4278500e-8eaf-47d6-a746-d23a33cc2603-kube-api-access-b798t" (OuterVolumeSpecName: "kube-api-access-b798t") pod "4278500e-8eaf-47d6-a746-d23a33cc2603" (UID: "4278500e-8eaf-47d6-a746-d23a33cc2603"). InnerVolumeSpecName "kube-api-access-b798t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.778294 4825 generic.go:334] "Generic (PLEG): container finished" podID="686cc9da-301b-40e3-ae00-165f01c28654" containerID="1047a606b1ee037b80d24b9ee14842b9a940b706b46850845f6d31e8cb357cea" exitCode=0 Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.778736 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a651-account-create-update-mkmn4" event={"ID":"686cc9da-301b-40e3-ae00-165f01c28654","Type":"ContainerDied","Data":"1047a606b1ee037b80d24b9ee14842b9a940b706b46850845f6d31e8cb357cea"} Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.822602 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-brf2x" event={"ID":"d8b64a93-b139-429e-87fc-28428abaf0f5","Type":"ContainerDied","Data":"afbfbad4059a0203800b720b3e539278b8f89e38ca57d35e6d1d730608f53647"} Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.822641 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afbfbad4059a0203800b720b3e539278b8f89e38ca57d35e6d1d730608f53647" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.822708 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-brf2x" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.834865 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0655ea6-5383-4082-9129-1eedbc0f2336","Type":"ContainerStarted","Data":"4af9c6ff6f0bdd2490612b20f791224e5807b5ccd3db93754ff81ab8fc3a499a"} Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.840229 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21a15af5-89b0-4b01-818e-318d7930e7cf","Type":"ContainerStarted","Data":"399b9b6a43e0b3e23c39850ed56c156a80367d6e4488e4e4c7dc98f85add108f"} Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.841590 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4278500e-8eaf-47d6-a746-d23a33cc2603-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.841619 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b798t\" (UniqueName: \"kubernetes.io/projected/4278500e-8eaf-47d6-a746-d23a33cc2603-kube-api-access-b798t\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.842747 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b51ad21-4de0-4a11-9859-c69b78c5c9fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.842758 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb47j\" (UniqueName: \"kubernetes.io/projected/2b51ad21-4de0-4a11-9859-c69b78c5c9fe-kube-api-access-xb47j\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:12 crc kubenswrapper[4825]: I0122 15:46:12.892879 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 15:46:13 crc kubenswrapper[4825]: 
I0122 15:46:13.464594 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-a651-account-create-update-mkmn4" Jan 22 15:46:13 crc kubenswrapper[4825]: I0122 15:46:13.585004 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/686cc9da-301b-40e3-ae00-165f01c28654-operator-scripts\") pod \"686cc9da-301b-40e3-ae00-165f01c28654\" (UID: \"686cc9da-301b-40e3-ae00-165f01c28654\") " Jan 22 15:46:13 crc kubenswrapper[4825]: I0122 15:46:13.585517 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fbwb\" (UniqueName: \"kubernetes.io/projected/686cc9da-301b-40e3-ae00-165f01c28654-kube-api-access-5fbwb\") pod \"686cc9da-301b-40e3-ae00-165f01c28654\" (UID: \"686cc9da-301b-40e3-ae00-165f01c28654\") " Jan 22 15:46:13 crc kubenswrapper[4825]: I0122 15:46:13.589375 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686cc9da-301b-40e3-ae00-165f01c28654-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "686cc9da-301b-40e3-ae00-165f01c28654" (UID: "686cc9da-301b-40e3-ae00-165f01c28654"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:46:13 crc kubenswrapper[4825]: I0122 15:46:13.654192 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/686cc9da-301b-40e3-ae00-165f01c28654-kube-api-access-5fbwb" (OuterVolumeSpecName: "kube-api-access-5fbwb") pod "686cc9da-301b-40e3-ae00-165f01c28654" (UID: "686cc9da-301b-40e3-ae00-165f01c28654"). InnerVolumeSpecName "kube-api-access-5fbwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:46:13 crc kubenswrapper[4825]: I0122 15:46:13.691037 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/686cc9da-301b-40e3-ae00-165f01c28654-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:13 crc kubenswrapper[4825]: I0122 15:46:13.691069 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fbwb\" (UniqueName: \"kubernetes.io/projected/686cc9da-301b-40e3-ae00-165f01c28654-kube-api-access-5fbwb\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:13 crc kubenswrapper[4825]: I0122 15:46:13.925004 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a651-account-create-update-mkmn4" event={"ID":"686cc9da-301b-40e3-ae00-165f01c28654","Type":"ContainerDied","Data":"084a0ce81c8438d855832ff4d6986ac1df30545a5f5cd7df7a52b302e71f522c"} Jan 22 15:46:13 crc kubenswrapper[4825]: I0122 15:46:13.925493 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="084a0ce81c8438d855832ff4d6986ac1df30545a5f5cd7df7a52b302e71f522c" Jan 22 15:46:13 crc kubenswrapper[4825]: I0122 15:46:13.925629 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a651-account-create-update-mkmn4" Jan 22 15:46:13 crc kubenswrapper[4825]: I0122 15:46:13.933468 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b29e-account-create-update-56crs" event={"ID":"4fc0009b-f413-4c14-829a-e3ffa344de3c","Type":"ContainerDied","Data":"3afb1db8ef42b5ba472b570776487c3651fb21bffa227d97c7f83cd4fd0251de"} Jan 22 15:46:13 crc kubenswrapper[4825]: I0122 15:46:13.933500 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3afb1db8ef42b5ba472b570776487c3651fb21bffa227d97c7f83cd4fd0251de" Jan 22 15:46:13 crc kubenswrapper[4825]: I0122 15:46:13.949527 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa216742-9142-43e8-a320-47c91f44da7e","Type":"ContainerStarted","Data":"548968af16efc44dc47f6db0f4b180999619a613c5b6f8b13e0bc83a5597e917"} Jan 22 15:46:13 crc kubenswrapper[4825]: I0122 15:46:13.956697 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a2df-account-create-update-m6twc" Jan 22 15:46:13 crc kubenswrapper[4825]: I0122 15:46:13.967960 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a2df-account-create-update-m6twc" event={"ID":"85ae6c15-5c71-4c86-ac9f-1df49436e099","Type":"ContainerDied","Data":"0f351efb4e6844458a45eaa7c3bc2a6cd6871f6126714047552ca6b9a7dc63ea"} Jan 22 15:46:13 crc kubenswrapper[4825]: I0122 15:46:13.968278 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f351efb4e6844458a45eaa7c3bc2a6cd6871f6126714047552ca6b9a7dc63ea" Jan 22 15:46:13 crc kubenswrapper[4825]: I0122 15:46:13.971707 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b29e-account-create-update-56crs" Jan 22 15:46:14 crc kubenswrapper[4825]: I0122 15:46:14.011830 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkcd9\" (UniqueName: \"kubernetes.io/projected/4fc0009b-f413-4c14-829a-e3ffa344de3c-kube-api-access-xkcd9\") pod \"4fc0009b-f413-4c14-829a-e3ffa344de3c\" (UID: \"4fc0009b-f413-4c14-829a-e3ffa344de3c\") " Jan 22 15:46:14 crc kubenswrapper[4825]: I0122 15:46:14.011879 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fc0009b-f413-4c14-829a-e3ffa344de3c-operator-scripts\") pod \"4fc0009b-f413-4c14-829a-e3ffa344de3c\" (UID: \"4fc0009b-f413-4c14-829a-e3ffa344de3c\") " Jan 22 15:46:14 crc kubenswrapper[4825]: I0122 15:46:14.011970 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wshbc\" (UniqueName: \"kubernetes.io/projected/85ae6c15-5c71-4c86-ac9f-1df49436e099-kube-api-access-wshbc\") pod \"85ae6c15-5c71-4c86-ac9f-1df49436e099\" (UID: \"85ae6c15-5c71-4c86-ac9f-1df49436e099\") " Jan 22 15:46:14 crc kubenswrapper[4825]: I0122 15:46:14.012089 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85ae6c15-5c71-4c86-ac9f-1df49436e099-operator-scripts\") pod \"85ae6c15-5c71-4c86-ac9f-1df49436e099\" (UID: \"85ae6c15-5c71-4c86-ac9f-1df49436e099\") " Jan 22 15:46:14 crc kubenswrapper[4825]: I0122 15:46:14.013290 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ae6c15-5c71-4c86-ac9f-1df49436e099-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85ae6c15-5c71-4c86-ac9f-1df49436e099" (UID: "85ae6c15-5c71-4c86-ac9f-1df49436e099"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:46:14 crc kubenswrapper[4825]: I0122 15:46:14.013292 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fc0009b-f413-4c14-829a-e3ffa344de3c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4fc0009b-f413-4c14-829a-e3ffa344de3c" (UID: "4fc0009b-f413-4c14-829a-e3ffa344de3c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:46:14 crc kubenswrapper[4825]: I0122 15:46:14.030810 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fc0009b-f413-4c14-829a-e3ffa344de3c-kube-api-access-xkcd9" (OuterVolumeSpecName: "kube-api-access-xkcd9") pod "4fc0009b-f413-4c14-829a-e3ffa344de3c" (UID: "4fc0009b-f413-4c14-829a-e3ffa344de3c"). InnerVolumeSpecName "kube-api-access-xkcd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:46:14 crc kubenswrapper[4825]: I0122 15:46:14.030901 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85ae6c15-5c71-4c86-ac9f-1df49436e099-kube-api-access-wshbc" (OuterVolumeSpecName: "kube-api-access-wshbc") pod "85ae6c15-5c71-4c86-ac9f-1df49436e099" (UID: "85ae6c15-5c71-4c86-ac9f-1df49436e099"). InnerVolumeSpecName "kube-api-access-wshbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:46:14 crc kubenswrapper[4825]: I0122 15:46:14.116598 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85ae6c15-5c71-4c86-ac9f-1df49436e099-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:14 crc kubenswrapper[4825]: I0122 15:46:14.116648 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkcd9\" (UniqueName: \"kubernetes.io/projected/4fc0009b-f413-4c14-829a-e3ffa344de3c-kube-api-access-xkcd9\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:14 crc kubenswrapper[4825]: I0122 15:46:14.116662 4825 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fc0009b-f413-4c14-829a-e3ffa344de3c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:14 crc kubenswrapper[4825]: I0122 15:46:14.116673 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wshbc\" (UniqueName: \"kubernetes.io/projected/85ae6c15-5c71-4c86-ac9f-1df49436e099-kube-api-access-wshbc\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:14 crc kubenswrapper[4825]: I0122 15:46:14.629119 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="c8f0065f-b8fd-4a5d-a098-2db018daf9ee" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.206:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 15:46:14 crc kubenswrapper[4825]: I0122 15:46:14.750476 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Jan 22 15:46:15 crc kubenswrapper[4825]: I0122 15:46:15.018970 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"21a15af5-89b0-4b01-818e-318d7930e7cf","Type":"ContainerStarted","Data":"7a97db7e45febcd4553dfb3eeca149c68bdbfad7ee8f9de35d4ad2be41ff43dd"} Jan 22 15:46:15 crc kubenswrapper[4825]: I0122 15:46:15.033301 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa216742-9142-43e8-a320-47c91f44da7e","Type":"ContainerStarted","Data":"7bd39d48fb3e9cf8bc75ad8209587db7796106f7c7a9b58c7d6bfd23cd102ccc"} Jan 22 15:46:15 crc kubenswrapper[4825]: I0122 15:46:15.058584 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a2df-account-create-update-m6twc" Jan 22 15:46:15 crc kubenswrapper[4825]: I0122 15:46:15.062754 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerName="ceilometer-central-agent" containerID="cri-o://881ad51114925b82d3e2b575b5eac6570265bbe6ca88e75e9258c0acacbd9610" gracePeriod=30 Jan 22 15:46:15 crc kubenswrapper[4825]: I0122 15:46:15.063361 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerName="proxy-httpd" containerID="cri-o://e6f34d6720d6385e444c1a136e378f4feaca6bafe499e1c0adc29b5efb9a2abc" gracePeriod=30 Jan 22 15:46:15 crc kubenswrapper[4825]: I0122 15:46:15.063413 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerName="sg-core" containerID="cri-o://4af9c6ff6f0bdd2490612b20f791224e5807b5ccd3db93754ff81ab8fc3a499a" gracePeriod=30 Jan 22 15:46:15 crc kubenswrapper[4825]: I0122 15:46:15.063452 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerName="ceilometer-notification-agent" 
containerID="cri-o://2539e5f70d32e284c60e666842e9a5fc27657f9405215a009e723c1c7bcef665" gracePeriod=30 Jan 22 15:46:15 crc kubenswrapper[4825]: I0122 15:46:15.063612 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0655ea6-5383-4082-9129-1eedbc0f2336","Type":"ContainerStarted","Data":"e6f34d6720d6385e444c1a136e378f4feaca6bafe499e1c0adc29b5efb9a2abc"} Jan 22 15:46:15 crc kubenswrapper[4825]: I0122 15:46:15.063645 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 15:46:15 crc kubenswrapper[4825]: I0122 15:46:15.063695 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b29e-account-create-update-56crs" Jan 22 15:46:15 crc kubenswrapper[4825]: I0122 15:46:15.098045 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.8106180099999998 podStartE2EDuration="8.098023603s" podCreationTimestamp="2026-01-22 15:46:07 +0000 UTC" firstStartedPulling="2026-01-22 15:46:09.021490445 +0000 UTC m=+1315.783017355" lastFinishedPulling="2026-01-22 15:46:14.308896038 +0000 UTC m=+1321.070422948" observedRunningTime="2026-01-22 15:46:15.090383856 +0000 UTC m=+1321.851910756" watchObservedRunningTime="2026-01-22 15:46:15.098023603 +0000 UTC m=+1321.859550513" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.069159 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa216742-9142-43e8-a320-47c91f44da7e","Type":"ContainerStarted","Data":"49bda058a0233b236f1ee0090da4f1b65559d08ff189691292756dce80fc3dbc"} Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.073378 4825 generic.go:334] "Generic (PLEG): container finished" podID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerID="e6f34d6720d6385e444c1a136e378f4feaca6bafe499e1c0adc29b5efb9a2abc" exitCode=0 Jan 22 15:46:16 crc kubenswrapper[4825]: 
I0122 15:46:16.073406 4825 generic.go:334] "Generic (PLEG): container finished" podID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerID="4af9c6ff6f0bdd2490612b20f791224e5807b5ccd3db93754ff81ab8fc3a499a" exitCode=2 Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.073415 4825 generic.go:334] "Generic (PLEG): container finished" podID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerID="2539e5f70d32e284c60e666842e9a5fc27657f9405215a009e723c1c7bcef665" exitCode=0 Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.073452 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0655ea6-5383-4082-9129-1eedbc0f2336","Type":"ContainerDied","Data":"e6f34d6720d6385e444c1a136e378f4feaca6bafe499e1c0adc29b5efb9a2abc"} Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.073476 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0655ea6-5383-4082-9129-1eedbc0f2336","Type":"ContainerDied","Data":"4af9c6ff6f0bdd2490612b20f791224e5807b5ccd3db93754ff81ab8fc3a499a"} Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.073485 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0655ea6-5383-4082-9129-1eedbc0f2336","Type":"ContainerDied","Data":"2539e5f70d32e284c60e666842e9a5fc27657f9405215a009e723c1c7bcef665"} Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.075494 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21a15af5-89b0-4b01-818e-318d7930e7cf","Type":"ContainerStarted","Data":"ca383d28cfe5b724e8029103ef3a517381ba52c32b3d03093c0f2a28ce0d3780"} Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.087618 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.087599031 podStartE2EDuration="5.087599031s" podCreationTimestamp="2026-01-22 15:46:11 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:46:16.086264562 +0000 UTC m=+1322.847791472" watchObservedRunningTime="2026-01-22 15:46:16.087599031 +0000 UTC m=+1322.849125941" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.124916 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.124895702 podStartE2EDuration="6.124895702s" podCreationTimestamp="2026-01-22 15:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:46:16.118684085 +0000 UTC m=+1322.880211005" watchObservedRunningTime="2026-01-22 15:46:16.124895702 +0000 UTC m=+1322.886422612" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.730504 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9vs7x"] Jan 22 15:46:16 crc kubenswrapper[4825]: E0122 15:46:16.731356 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fc0009b-f413-4c14-829a-e3ffa344de3c" containerName="mariadb-account-create-update" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.731383 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fc0009b-f413-4c14-829a-e3ffa344de3c" containerName="mariadb-account-create-update" Jan 22 15:46:16 crc kubenswrapper[4825]: E0122 15:46:16.731402 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b51ad21-4de0-4a11-9859-c69b78c5c9fe" containerName="mariadb-database-create" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.731410 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b51ad21-4de0-4a11-9859-c69b78c5c9fe" containerName="mariadb-database-create" Jan 22 15:46:16 crc kubenswrapper[4825]: E0122 15:46:16.731429 4825 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4278500e-8eaf-47d6-a746-d23a33cc2603" containerName="mariadb-database-create" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.731436 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="4278500e-8eaf-47d6-a746-d23a33cc2603" containerName="mariadb-database-create" Jan 22 15:46:16 crc kubenswrapper[4825]: E0122 15:46:16.731466 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b64a93-b139-429e-87fc-28428abaf0f5" containerName="mariadb-database-create" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.731474 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b64a93-b139-429e-87fc-28428abaf0f5" containerName="mariadb-database-create" Jan 22 15:46:16 crc kubenswrapper[4825]: E0122 15:46:16.731485 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686cc9da-301b-40e3-ae00-165f01c28654" containerName="mariadb-account-create-update" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.731493 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="686cc9da-301b-40e3-ae00-165f01c28654" containerName="mariadb-account-create-update" Jan 22 15:46:16 crc kubenswrapper[4825]: E0122 15:46:16.731509 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ae6c15-5c71-4c86-ac9f-1df49436e099" containerName="mariadb-account-create-update" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.731516 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ae6c15-5c71-4c86-ac9f-1df49436e099" containerName="mariadb-account-create-update" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.731761 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fc0009b-f413-4c14-829a-e3ffa344de3c" containerName="mariadb-account-create-update" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.731780 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8b64a93-b139-429e-87fc-28428abaf0f5" containerName="mariadb-database-create" Jan 22 15:46:16 crc 
kubenswrapper[4825]: I0122 15:46:16.731791 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b51ad21-4de0-4a11-9859-c69b78c5c9fe" containerName="mariadb-database-create" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.731802 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="4278500e-8eaf-47d6-a746-d23a33cc2603" containerName="mariadb-database-create" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.731811 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="85ae6c15-5c71-4c86-ac9f-1df49436e099" containerName="mariadb-account-create-update" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.731823 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="686cc9da-301b-40e3-ae00-165f01c28654" containerName="mariadb-account-create-update" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.732778 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9vs7x" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.735938 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.736323 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-xt7m5" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.736505 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.758333 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9vs7x"] Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.821204 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a02e958-e76f-4351-bf61-0b0f4ec0e410-config-data\") pod 
\"nova-cell0-conductor-db-sync-9vs7x\" (UID: \"7a02e958-e76f-4351-bf61-0b0f4ec0e410\") " pod="openstack/nova-cell0-conductor-db-sync-9vs7x" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.821629 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a02e958-e76f-4351-bf61-0b0f4ec0e410-scripts\") pod \"nova-cell0-conductor-db-sync-9vs7x\" (UID: \"7a02e958-e76f-4351-bf61-0b0f4ec0e410\") " pod="openstack/nova-cell0-conductor-db-sync-9vs7x" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.821856 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-592wt\" (UniqueName: \"kubernetes.io/projected/7a02e958-e76f-4351-bf61-0b0f4ec0e410-kube-api-access-592wt\") pod \"nova-cell0-conductor-db-sync-9vs7x\" (UID: \"7a02e958-e76f-4351-bf61-0b0f4ec0e410\") " pod="openstack/nova-cell0-conductor-db-sync-9vs7x" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.822286 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a02e958-e76f-4351-bf61-0b0f4ec0e410-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9vs7x\" (UID: \"7a02e958-e76f-4351-bf61-0b0f4ec0e410\") " pod="openstack/nova-cell0-conductor-db-sync-9vs7x" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.924603 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a02e958-e76f-4351-bf61-0b0f4ec0e410-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9vs7x\" (UID: \"7a02e958-e76f-4351-bf61-0b0f4ec0e410\") " pod="openstack/nova-cell0-conductor-db-sync-9vs7x" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.924665 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7a02e958-e76f-4351-bf61-0b0f4ec0e410-config-data\") pod \"nova-cell0-conductor-db-sync-9vs7x\" (UID: \"7a02e958-e76f-4351-bf61-0b0f4ec0e410\") " pod="openstack/nova-cell0-conductor-db-sync-9vs7x" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.924690 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a02e958-e76f-4351-bf61-0b0f4ec0e410-scripts\") pod \"nova-cell0-conductor-db-sync-9vs7x\" (UID: \"7a02e958-e76f-4351-bf61-0b0f4ec0e410\") " pod="openstack/nova-cell0-conductor-db-sync-9vs7x" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.924737 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-592wt\" (UniqueName: \"kubernetes.io/projected/7a02e958-e76f-4351-bf61-0b0f4ec0e410-kube-api-access-592wt\") pod \"nova-cell0-conductor-db-sync-9vs7x\" (UID: \"7a02e958-e76f-4351-bf61-0b0f4ec0e410\") " pod="openstack/nova-cell0-conductor-db-sync-9vs7x" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.934595 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a02e958-e76f-4351-bf61-0b0f4ec0e410-config-data\") pod \"nova-cell0-conductor-db-sync-9vs7x\" (UID: \"7a02e958-e76f-4351-bf61-0b0f4ec0e410\") " pod="openstack/nova-cell0-conductor-db-sync-9vs7x" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.935208 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a02e958-e76f-4351-bf61-0b0f4ec0e410-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9vs7x\" (UID: \"7a02e958-e76f-4351-bf61-0b0f4ec0e410\") " pod="openstack/nova-cell0-conductor-db-sync-9vs7x" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.941442 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7a02e958-e76f-4351-bf61-0b0f4ec0e410-scripts\") pod \"nova-cell0-conductor-db-sync-9vs7x\" (UID: \"7a02e958-e76f-4351-bf61-0b0f4ec0e410\") " pod="openstack/nova-cell0-conductor-db-sync-9vs7x" Jan 22 15:46:16 crc kubenswrapper[4825]: I0122 15:46:16.946492 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-592wt\" (UniqueName: \"kubernetes.io/projected/7a02e958-e76f-4351-bf61-0b0f4ec0e410-kube-api-access-592wt\") pod \"nova-cell0-conductor-db-sync-9vs7x\" (UID: \"7a02e958-e76f-4351-bf61-0b0f4ec0e410\") " pod="openstack/nova-cell0-conductor-db-sync-9vs7x" Jan 22 15:46:17 crc kubenswrapper[4825]: I0122 15:46:17.054362 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9vs7x" Jan 22 15:46:17 crc kubenswrapper[4825]: I0122 15:46:17.590138 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9vs7x"] Jan 22 15:46:18 crc kubenswrapper[4825]: I0122 15:46:18.097468 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9vs7x" event={"ID":"7a02e958-e76f-4351-bf61-0b0f4ec0e410","Type":"ContainerStarted","Data":"bdcdf3860bf17c9469defe9e9a587353a80b28747cea4fcca4b6739420087540"} Jan 22 15:46:21 crc kubenswrapper[4825]: I0122 15:46:21.245043 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 15:46:21 crc kubenswrapper[4825]: I0122 15:46:21.245108 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 15:46:21 crc kubenswrapper[4825]: I0122 15:46:21.276663 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 15:46:21 crc kubenswrapper[4825]: I0122 15:46:21.299682 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-internal-api-0" Jan 22 15:46:21 crc kubenswrapper[4825]: I0122 15:46:21.850432 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 15:46:21 crc kubenswrapper[4825]: I0122 15:46:21.854018 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 15:46:21 crc kubenswrapper[4825]: I0122 15:46:21.982567 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 15:46:21 crc kubenswrapper[4825]: I0122 15:46:21.993111 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 15:46:22 crc kubenswrapper[4825]: I0122 15:46:22.169716 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 15:46:22 crc kubenswrapper[4825]: I0122 15:46:22.170041 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 15:46:22 crc kubenswrapper[4825]: I0122 15:46:22.170070 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 15:46:22 crc kubenswrapper[4825]: I0122 15:46:22.170082 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 15:46:23 crc kubenswrapper[4825]: I0122 15:46:23.387034 4825 generic.go:334] "Generic (PLEG): container finished" podID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerID="881ad51114925b82d3e2b575b5eac6570265bbe6ca88e75e9258c0acacbd9610" exitCode=0 Jan 22 15:46:23 crc kubenswrapper[4825]: I0122 15:46:23.388883 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a0655ea6-5383-4082-9129-1eedbc0f2336","Type":"ContainerDied","Data":"881ad51114925b82d3e2b575b5eac6570265bbe6ca88e75e9258c0acacbd9610"} Jan 22 15:46:24 crc kubenswrapper[4825]: I0122 15:46:24.401191 4825 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 15:46:24 crc kubenswrapper[4825]: I0122 15:46:24.401231 4825 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 15:46:25 crc kubenswrapper[4825]: I0122 15:46:25.840091 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 15:46:25 crc kubenswrapper[4825]: I0122 15:46:25.840568 4825 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 15:46:25 crc kubenswrapper[4825]: I0122 15:46:25.845247 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 15:46:25 crc kubenswrapper[4825]: I0122 15:46:25.949777 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 15:46:25 crc kubenswrapper[4825]: I0122 15:46:25.950057 4825 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 15:46:25 crc kubenswrapper[4825]: I0122 15:46:25.951189 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.649436 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0655ea6-5383-4082-9129-1eedbc0f2336","Type":"ContainerDied","Data":"f2f510a2c45090e3472cbe7f7665fbff94ee07a138383e369bcde85e437ef794"} Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.650046 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2f510a2c45090e3472cbe7f7665fbff94ee07a138383e369bcde85e437ef794" Jan 22 15:46:32 crc 
kubenswrapper[4825]: I0122 15:46:32.678662 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.831249 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-sg-core-conf-yaml\") pod \"a0655ea6-5383-4082-9129-1eedbc0f2336\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.831313 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0655ea6-5383-4082-9129-1eedbc0f2336-log-httpd\") pod \"a0655ea6-5383-4082-9129-1eedbc0f2336\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.831363 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ln2f\" (UniqueName: \"kubernetes.io/projected/a0655ea6-5383-4082-9129-1eedbc0f2336-kube-api-access-2ln2f\") pod \"a0655ea6-5383-4082-9129-1eedbc0f2336\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.831456 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-combined-ca-bundle\") pod \"a0655ea6-5383-4082-9129-1eedbc0f2336\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.831482 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-scripts\") pod \"a0655ea6-5383-4082-9129-1eedbc0f2336\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.831583 4825 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0655ea6-5383-4082-9129-1eedbc0f2336-run-httpd\") pod \"a0655ea6-5383-4082-9129-1eedbc0f2336\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.831679 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-config-data\") pod \"a0655ea6-5383-4082-9129-1eedbc0f2336\" (UID: \"a0655ea6-5383-4082-9129-1eedbc0f2336\") " Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.832532 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0655ea6-5383-4082-9129-1eedbc0f2336-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a0655ea6-5383-4082-9129-1eedbc0f2336" (UID: "a0655ea6-5383-4082-9129-1eedbc0f2336"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.832673 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0655ea6-5383-4082-9129-1eedbc0f2336-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a0655ea6-5383-4082-9129-1eedbc0f2336" (UID: "a0655ea6-5383-4082-9129-1eedbc0f2336"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.838473 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-scripts" (OuterVolumeSpecName: "scripts") pod "a0655ea6-5383-4082-9129-1eedbc0f2336" (UID: "a0655ea6-5383-4082-9129-1eedbc0f2336"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.838577 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0655ea6-5383-4082-9129-1eedbc0f2336-kube-api-access-2ln2f" (OuterVolumeSpecName: "kube-api-access-2ln2f") pod "a0655ea6-5383-4082-9129-1eedbc0f2336" (UID: "a0655ea6-5383-4082-9129-1eedbc0f2336"). InnerVolumeSpecName "kube-api-access-2ln2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.868486 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a0655ea6-5383-4082-9129-1eedbc0f2336" (UID: "a0655ea6-5383-4082-9129-1eedbc0f2336"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.936018 4825 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.936064 4825 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0655ea6-5383-4082-9129-1eedbc0f2336-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.936083 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ln2f\" (UniqueName: \"kubernetes.io/projected/a0655ea6-5383-4082-9129-1eedbc0f2336-kube-api-access-2ln2f\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.936097 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-scripts\") on node 
\"crc\" DevicePath \"\"" Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.936107 4825 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0655ea6-5383-4082-9129-1eedbc0f2336-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.961334 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-config-data" (OuterVolumeSpecName: "config-data") pod "a0655ea6-5383-4082-9129-1eedbc0f2336" (UID: "a0655ea6-5383-4082-9129-1eedbc0f2336"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:46:32 crc kubenswrapper[4825]: I0122 15:46:32.971204 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0655ea6-5383-4082-9129-1eedbc0f2336" (UID: "a0655ea6-5383-4082-9129-1eedbc0f2336"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:46:33 crc kubenswrapper[4825]: I0122 15:46:33.038581 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:33 crc kubenswrapper[4825]: I0122 15:46:33.038616 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0655ea6-5383-4082-9129-1eedbc0f2336-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:33 crc kubenswrapper[4825]: E0122 15:46:33.113503 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Jan 22 15:46:33 crc kubenswrapper[4825]: E0122 15:46:33.113762 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-592wt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-9vs7x_openstack(7a02e958-e76f-4351-bf61-0b0f4ec0e410): ErrImagePull: rpc error: code = 
Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 15:46:33 crc kubenswrapper[4825]: E0122 15:46:33.116238 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-9vs7x" podUID="7a02e958-e76f-4351-bf61-0b0f4ec0e410" Jan 22 15:46:33 crc kubenswrapper[4825]: I0122 15:46:33.663441 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:46:33 crc kubenswrapper[4825]: E0122 15:46:33.667091 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-9vs7x" podUID="7a02e958-e76f-4351-bf61-0b0f4ec0e410" Jan 22 15:46:33 crc kubenswrapper[4825]: I0122 15:46:33.892297 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:46:33 crc kubenswrapper[4825]: I0122 15:46:33.909049 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:46:33 crc kubenswrapper[4825]: I0122 15:46:33.922842 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:46:33 crc kubenswrapper[4825]: E0122 15:46:33.924023 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerName="ceilometer-central-agent" Jan 22 15:46:33 crc kubenswrapper[4825]: I0122 15:46:33.924127 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerName="ceilometer-central-agent" Jan 22 15:46:33 crc kubenswrapper[4825]: E0122 15:46:33.924200 4825 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerName="sg-core" Jan 22 15:46:33 crc kubenswrapper[4825]: I0122 15:46:33.924292 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerName="sg-core" Jan 22 15:46:33 crc kubenswrapper[4825]: E0122 15:46:33.924379 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerName="proxy-httpd" Jan 22 15:46:33 crc kubenswrapper[4825]: I0122 15:46:33.924441 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerName="proxy-httpd" Jan 22 15:46:33 crc kubenswrapper[4825]: E0122 15:46:33.924505 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerName="ceilometer-notification-agent" Jan 22 15:46:33 crc kubenswrapper[4825]: I0122 15:46:33.924579 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerName="ceilometer-notification-agent" Jan 22 15:46:33 crc kubenswrapper[4825]: I0122 15:46:33.924882 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerName="sg-core" Jan 22 15:46:33 crc kubenswrapper[4825]: I0122 15:46:33.925023 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerName="ceilometer-central-agent" Jan 22 15:46:33 crc kubenswrapper[4825]: I0122 15:46:33.925129 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerName="ceilometer-notification-agent" Jan 22 15:46:33 crc kubenswrapper[4825]: I0122 15:46:33.925217 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0655ea6-5383-4082-9129-1eedbc0f2336" containerName="proxy-httpd" Jan 22 15:46:33 crc kubenswrapper[4825]: I0122 15:46:33.927582 4825 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:46:33 crc kubenswrapper[4825]: I0122 15:46:33.930080 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:46:33 crc kubenswrapper[4825]: I0122 15:46:33.934684 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 15:46:33 crc kubenswrapper[4825]: I0122 15:46:33.935146 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.079549 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-scripts\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.079734 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.079788 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-config-data\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.079913 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae271e3-f39d-4512-be27-1568327ce8f0-run-httpd\") pod \"ceilometer-0\" (UID: 
\"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.080034 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8p9z\" (UniqueName: \"kubernetes.io/projected/cae271e3-f39d-4512-be27-1568327ce8f0-kube-api-access-p8p9z\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.080081 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae271e3-f39d-4512-be27-1568327ce8f0-log-httpd\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.080105 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.182970 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8p9z\" (UniqueName: \"kubernetes.io/projected/cae271e3-f39d-4512-be27-1568327ce8f0-kube-api-access-p8p9z\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.183068 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae271e3-f39d-4512-be27-1568327ce8f0-log-httpd\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.183092 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.183160 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-scripts\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.183249 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.183278 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-config-data\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.183362 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae271e3-f39d-4512-be27-1568327ce8f0-run-httpd\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.184091 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae271e3-f39d-4512-be27-1568327ce8f0-run-httpd\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " 
pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.184829 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae271e3-f39d-4512-be27-1568327ce8f0-log-httpd\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.190553 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.192311 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-config-data\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.193293 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-scripts\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.193537 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.202757 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8p9z\" (UniqueName: 
\"kubernetes.io/projected/cae271e3-f39d-4512-be27-1568327ce8f0-kube-api-access-p8p9z\") pod \"ceilometer-0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.258036 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:46:34 crc kubenswrapper[4825]: I0122 15:46:34.859485 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:46:34 crc kubenswrapper[4825]: W0122 15:46:34.862344 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcae271e3_f39d_4512_be27_1568327ce8f0.slice/crio-7155bf55cbab353eed17a398fa67cbdcc1d84fa629baa65a4f346908e9d9854a WatchSource:0}: Error finding container 7155bf55cbab353eed17a398fa67cbdcc1d84fa629baa65a4f346908e9d9854a: Status 404 returned error can't find the container with id 7155bf55cbab353eed17a398fa67cbdcc1d84fa629baa65a4f346908e9d9854a Jan 22 15:46:35 crc kubenswrapper[4825]: I0122 15:46:35.531353 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0655ea6-5383-4082-9129-1eedbc0f2336" path="/var/lib/kubelet/pods/a0655ea6-5383-4082-9129-1eedbc0f2336/volumes" Jan 22 15:46:35 crc kubenswrapper[4825]: I0122 15:46:35.684950 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae271e3-f39d-4512-be27-1568327ce8f0","Type":"ContainerStarted","Data":"7155bf55cbab353eed17a398fa67cbdcc1d84fa629baa65a4f346908e9d9854a"} Jan 22 15:46:36 crc kubenswrapper[4825]: I0122 15:46:36.703280 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae271e3-f39d-4512-be27-1568327ce8f0","Type":"ContainerStarted","Data":"1106fd4ffbaefb4510a6b262abc760aeade5d4df915edb5fd5165c8cf50756e0"} Jan 22 15:46:37 crc kubenswrapper[4825]: I0122 15:46:37.715579 4825 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"cae271e3-f39d-4512-be27-1568327ce8f0","Type":"ContainerStarted","Data":"2aec5773df88bd97fc8ddff44799d844b6935d4757292f36e873e6c6687d439d"} Jan 22 15:46:38 crc kubenswrapper[4825]: I0122 15:46:38.201860 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:46:38 crc kubenswrapper[4825]: I0122 15:46:38.728715 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae271e3-f39d-4512-be27-1568327ce8f0","Type":"ContainerStarted","Data":"ebd93d541f755ec0bd42af4fabe098e5423b80629df914070070c89de9315120"} Jan 22 15:46:40 crc kubenswrapper[4825]: I0122 15:46:40.773280 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae271e3-f39d-4512-be27-1568327ce8f0","Type":"ContainerStarted","Data":"296b289048d42997a8d44da3aab1d63c4cc084e78381ca4ebfc5c39cf9f06fc3"} Jan 22 15:46:40 crc kubenswrapper[4825]: I0122 15:46:40.774134 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cae271e3-f39d-4512-be27-1568327ce8f0" containerName="ceilometer-notification-agent" containerID="cri-o://2aec5773df88bd97fc8ddff44799d844b6935d4757292f36e873e6c6687d439d" gracePeriod=30 Jan 22 15:46:40 crc kubenswrapper[4825]: I0122 15:46:40.774148 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cae271e3-f39d-4512-be27-1568327ce8f0" containerName="ceilometer-central-agent" containerID="cri-o://1106fd4ffbaefb4510a6b262abc760aeade5d4df915edb5fd5165c8cf50756e0" gracePeriod=30 Jan 22 15:46:40 crc kubenswrapper[4825]: I0122 15:46:40.774206 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 15:46:40 crc kubenswrapper[4825]: I0122 15:46:40.774220 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="cae271e3-f39d-4512-be27-1568327ce8f0" containerName="proxy-httpd" containerID="cri-o://296b289048d42997a8d44da3aab1d63c4cc084e78381ca4ebfc5c39cf9f06fc3" gracePeriod=30 Jan 22 15:46:40 crc kubenswrapper[4825]: I0122 15:46:40.774365 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cae271e3-f39d-4512-be27-1568327ce8f0" containerName="sg-core" containerID="cri-o://ebd93d541f755ec0bd42af4fabe098e5423b80629df914070070c89de9315120" gracePeriod=30 Jan 22 15:46:40 crc kubenswrapper[4825]: I0122 15:46:40.800262 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.605052888 podStartE2EDuration="7.800243788s" podCreationTimestamp="2026-01-22 15:46:33 +0000 UTC" firstStartedPulling="2026-01-22 15:46:34.865333711 +0000 UTC m=+1341.626860621" lastFinishedPulling="2026-01-22 15:46:40.060524611 +0000 UTC m=+1346.822051521" observedRunningTime="2026-01-22 15:46:40.79609147 +0000 UTC m=+1347.557618380" watchObservedRunningTime="2026-01-22 15:46:40.800243788 +0000 UTC m=+1347.561770698" Jan 22 15:46:41 crc kubenswrapper[4825]: I0122 15:46:41.784652 4825 generic.go:334] "Generic (PLEG): container finished" podID="cae271e3-f39d-4512-be27-1568327ce8f0" containerID="296b289048d42997a8d44da3aab1d63c4cc084e78381ca4ebfc5c39cf9f06fc3" exitCode=0 Jan 22 15:46:41 crc kubenswrapper[4825]: I0122 15:46:41.785009 4825 generic.go:334] "Generic (PLEG): container finished" podID="cae271e3-f39d-4512-be27-1568327ce8f0" containerID="ebd93d541f755ec0bd42af4fabe098e5423b80629df914070070c89de9315120" exitCode=2 Jan 22 15:46:41 crc kubenswrapper[4825]: I0122 15:46:41.784823 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae271e3-f39d-4512-be27-1568327ce8f0","Type":"ContainerDied","Data":"296b289048d42997a8d44da3aab1d63c4cc084e78381ca4ebfc5c39cf9f06fc3"} Jan 22 15:46:41 crc kubenswrapper[4825]: I0122 15:46:41.785061 4825 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae271e3-f39d-4512-be27-1568327ce8f0","Type":"ContainerDied","Data":"ebd93d541f755ec0bd42af4fabe098e5423b80629df914070070c89de9315120"} Jan 22 15:46:41 crc kubenswrapper[4825]: I0122 15:46:41.785079 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae271e3-f39d-4512-be27-1568327ce8f0","Type":"ContainerDied","Data":"2aec5773df88bd97fc8ddff44799d844b6935d4757292f36e873e6c6687d439d"} Jan 22 15:46:41 crc kubenswrapper[4825]: I0122 15:46:41.785029 4825 generic.go:334] "Generic (PLEG): container finished" podID="cae271e3-f39d-4512-be27-1568327ce8f0" containerID="2aec5773df88bd97fc8ddff44799d844b6935d4757292f36e873e6c6687d439d" exitCode=0 Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.552917 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.565349 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae271e3-f39d-4512-be27-1568327ce8f0-log-httpd\") pod \"cae271e3-f39d-4512-be27-1568327ce8f0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.565468 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae271e3-f39d-4512-be27-1568327ce8f0-run-httpd\") pod \"cae271e3-f39d-4512-be27-1568327ce8f0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.565736 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8p9z\" (UniqueName: \"kubernetes.io/projected/cae271e3-f39d-4512-be27-1568327ce8f0-kube-api-access-p8p9z\") pod \"cae271e3-f39d-4512-be27-1568327ce8f0\" (UID: 
\"cae271e3-f39d-4512-be27-1568327ce8f0\") " Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.565812 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-sg-core-conf-yaml\") pod \"cae271e3-f39d-4512-be27-1568327ce8f0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.565877 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-combined-ca-bundle\") pod \"cae271e3-f39d-4512-be27-1568327ce8f0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.565886 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae271e3-f39d-4512-be27-1568327ce8f0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cae271e3-f39d-4512-be27-1568327ce8f0" (UID: "cae271e3-f39d-4512-be27-1568327ce8f0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.565934 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-config-data\") pod \"cae271e3-f39d-4512-be27-1568327ce8f0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.565947 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae271e3-f39d-4512-be27-1568327ce8f0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cae271e3-f39d-4512-be27-1568327ce8f0" (UID: "cae271e3-f39d-4512-be27-1568327ce8f0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.566011 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-scripts\") pod \"cae271e3-f39d-4512-be27-1568327ce8f0\" (UID: \"cae271e3-f39d-4512-be27-1568327ce8f0\") " Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.567495 4825 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae271e3-f39d-4512-be27-1568327ce8f0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.568515 4825 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae271e3-f39d-4512-be27-1568327ce8f0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.574950 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae271e3-f39d-4512-be27-1568327ce8f0-kube-api-access-p8p9z" (OuterVolumeSpecName: "kube-api-access-p8p9z") pod "cae271e3-f39d-4512-be27-1568327ce8f0" (UID: "cae271e3-f39d-4512-be27-1568327ce8f0"). InnerVolumeSpecName "kube-api-access-p8p9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.576719 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-scripts" (OuterVolumeSpecName: "scripts") pod "cae271e3-f39d-4512-be27-1568327ce8f0" (UID: "cae271e3-f39d-4512-be27-1568327ce8f0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.612318 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cae271e3-f39d-4512-be27-1568327ce8f0" (UID: "cae271e3-f39d-4512-be27-1568327ce8f0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.670870 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8p9z\" (UniqueName: \"kubernetes.io/projected/cae271e3-f39d-4512-be27-1568327ce8f0-kube-api-access-p8p9z\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.670905 4825 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.670918 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.681826 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cae271e3-f39d-4512-be27-1568327ce8f0" (UID: "cae271e3-f39d-4512-be27-1568327ce8f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.706966 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-config-data" (OuterVolumeSpecName: "config-data") pod "cae271e3-f39d-4512-be27-1568327ce8f0" (UID: "cae271e3-f39d-4512-be27-1568327ce8f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.772949 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.773010 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae271e3-f39d-4512-be27-1568327ce8f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.850151 4825 generic.go:334] "Generic (PLEG): container finished" podID="cae271e3-f39d-4512-be27-1568327ce8f0" containerID="1106fd4ffbaefb4510a6b262abc760aeade5d4df915edb5fd5165c8cf50756e0" exitCode=0 Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.850219 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.850262 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae271e3-f39d-4512-be27-1568327ce8f0","Type":"ContainerDied","Data":"1106fd4ffbaefb4510a6b262abc760aeade5d4df915edb5fd5165c8cf50756e0"} Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.851019 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae271e3-f39d-4512-be27-1568327ce8f0","Type":"ContainerDied","Data":"7155bf55cbab353eed17a398fa67cbdcc1d84fa629baa65a4f346908e9d9854a"} Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.851052 4825 scope.go:117] "RemoveContainer" containerID="296b289048d42997a8d44da3aab1d63c4cc084e78381ca4ebfc5c39cf9f06fc3" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.878117 4825 scope.go:117] "RemoveContainer" containerID="ebd93d541f755ec0bd42af4fabe098e5423b80629df914070070c89de9315120" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.901936 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.909435 4825 scope.go:117] "RemoveContainer" containerID="2aec5773df88bd97fc8ddff44799d844b6935d4757292f36e873e6c6687d439d" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.921237 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.937566 4825 scope.go:117] "RemoveContainer" containerID="1106fd4ffbaefb4510a6b262abc760aeade5d4df915edb5fd5165c8cf50756e0" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.943518 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:46:46 crc kubenswrapper[4825]: E0122 15:46:46.944042 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae271e3-f39d-4512-be27-1568327ce8f0" 
containerName="ceilometer-central-agent" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.944060 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae271e3-f39d-4512-be27-1568327ce8f0" containerName="ceilometer-central-agent" Jan 22 15:46:46 crc kubenswrapper[4825]: E0122 15:46:46.944072 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae271e3-f39d-4512-be27-1568327ce8f0" containerName="ceilometer-notification-agent" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.944078 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae271e3-f39d-4512-be27-1568327ce8f0" containerName="ceilometer-notification-agent" Jan 22 15:46:46 crc kubenswrapper[4825]: E0122 15:46:46.944115 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae271e3-f39d-4512-be27-1568327ce8f0" containerName="proxy-httpd" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.944129 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae271e3-f39d-4512-be27-1568327ce8f0" containerName="proxy-httpd" Jan 22 15:46:46 crc kubenswrapper[4825]: E0122 15:46:46.944144 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae271e3-f39d-4512-be27-1568327ce8f0" containerName="sg-core" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.944150 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae271e3-f39d-4512-be27-1568327ce8f0" containerName="sg-core" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.952541 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae271e3-f39d-4512-be27-1568327ce8f0" containerName="ceilometer-notification-agent" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.952584 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae271e3-f39d-4512-be27-1568327ce8f0" containerName="ceilometer-central-agent" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.952593 4825 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cae271e3-f39d-4512-be27-1568327ce8f0" containerName="sg-core" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.952631 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae271e3-f39d-4512-be27-1568327ce8f0" containerName="proxy-httpd" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.954674 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.954778 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.960393 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.961935 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.976556 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " pod="openstack/ceilometer-0" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.976714 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3973fcbe-86c9-4b0e-9f53-de4af29601dc-log-httpd\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " pod="openstack/ceilometer-0" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.976765 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-scripts\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " 
pod="openstack/ceilometer-0" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.977113 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l4qh\" (UniqueName: \"kubernetes.io/projected/3973fcbe-86c9-4b0e-9f53-de4af29601dc-kube-api-access-7l4qh\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " pod="openstack/ceilometer-0" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.977162 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-config-data\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " pod="openstack/ceilometer-0" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.977190 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " pod="openstack/ceilometer-0" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.977335 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3973fcbe-86c9-4b0e-9f53-de4af29601dc-run-httpd\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " pod="openstack/ceilometer-0" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.977681 4825 scope.go:117] "RemoveContainer" containerID="296b289048d42997a8d44da3aab1d63c4cc084e78381ca4ebfc5c39cf9f06fc3" Jan 22 15:46:46 crc kubenswrapper[4825]: E0122 15:46:46.978299 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"296b289048d42997a8d44da3aab1d63c4cc084e78381ca4ebfc5c39cf9f06fc3\": container with ID starting 
with 296b289048d42997a8d44da3aab1d63c4cc084e78381ca4ebfc5c39cf9f06fc3 not found: ID does not exist" containerID="296b289048d42997a8d44da3aab1d63c4cc084e78381ca4ebfc5c39cf9f06fc3" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.978360 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"296b289048d42997a8d44da3aab1d63c4cc084e78381ca4ebfc5c39cf9f06fc3"} err="failed to get container status \"296b289048d42997a8d44da3aab1d63c4cc084e78381ca4ebfc5c39cf9f06fc3\": rpc error: code = NotFound desc = could not find container \"296b289048d42997a8d44da3aab1d63c4cc084e78381ca4ebfc5c39cf9f06fc3\": container with ID starting with 296b289048d42997a8d44da3aab1d63c4cc084e78381ca4ebfc5c39cf9f06fc3 not found: ID does not exist" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.978393 4825 scope.go:117] "RemoveContainer" containerID="ebd93d541f755ec0bd42af4fabe098e5423b80629df914070070c89de9315120" Jan 22 15:46:46 crc kubenswrapper[4825]: E0122 15:46:46.978830 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebd93d541f755ec0bd42af4fabe098e5423b80629df914070070c89de9315120\": container with ID starting with ebd93d541f755ec0bd42af4fabe098e5423b80629df914070070c89de9315120 not found: ID does not exist" containerID="ebd93d541f755ec0bd42af4fabe098e5423b80629df914070070c89de9315120" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.978867 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebd93d541f755ec0bd42af4fabe098e5423b80629df914070070c89de9315120"} err="failed to get container status \"ebd93d541f755ec0bd42af4fabe098e5423b80629df914070070c89de9315120\": rpc error: code = NotFound desc = could not find container \"ebd93d541f755ec0bd42af4fabe098e5423b80629df914070070c89de9315120\": container with ID starting with ebd93d541f755ec0bd42af4fabe098e5423b80629df914070070c89de9315120 not found: ID does 
not exist" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.978889 4825 scope.go:117] "RemoveContainer" containerID="2aec5773df88bd97fc8ddff44799d844b6935d4757292f36e873e6c6687d439d" Jan 22 15:46:46 crc kubenswrapper[4825]: E0122 15:46:46.979180 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aec5773df88bd97fc8ddff44799d844b6935d4757292f36e873e6c6687d439d\": container with ID starting with 2aec5773df88bd97fc8ddff44799d844b6935d4757292f36e873e6c6687d439d not found: ID does not exist" containerID="2aec5773df88bd97fc8ddff44799d844b6935d4757292f36e873e6c6687d439d" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.979216 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aec5773df88bd97fc8ddff44799d844b6935d4757292f36e873e6c6687d439d"} err="failed to get container status \"2aec5773df88bd97fc8ddff44799d844b6935d4757292f36e873e6c6687d439d\": rpc error: code = NotFound desc = could not find container \"2aec5773df88bd97fc8ddff44799d844b6935d4757292f36e873e6c6687d439d\": container with ID starting with 2aec5773df88bd97fc8ddff44799d844b6935d4757292f36e873e6c6687d439d not found: ID does not exist" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.979235 4825 scope.go:117] "RemoveContainer" containerID="1106fd4ffbaefb4510a6b262abc760aeade5d4df915edb5fd5165c8cf50756e0" Jan 22 15:46:46 crc kubenswrapper[4825]: E0122 15:46:46.979519 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1106fd4ffbaefb4510a6b262abc760aeade5d4df915edb5fd5165c8cf50756e0\": container with ID starting with 1106fd4ffbaefb4510a6b262abc760aeade5d4df915edb5fd5165c8cf50756e0 not found: ID does not exist" containerID="1106fd4ffbaefb4510a6b262abc760aeade5d4df915edb5fd5165c8cf50756e0" Jan 22 15:46:46 crc kubenswrapper[4825]: I0122 15:46:46.979548 4825 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1106fd4ffbaefb4510a6b262abc760aeade5d4df915edb5fd5165c8cf50756e0"} err="failed to get container status \"1106fd4ffbaefb4510a6b262abc760aeade5d4df915edb5fd5165c8cf50756e0\": rpc error: code = NotFound desc = could not find container \"1106fd4ffbaefb4510a6b262abc760aeade5d4df915edb5fd5165c8cf50756e0\": container with ID starting with 1106fd4ffbaefb4510a6b262abc760aeade5d4df915edb5fd5165c8cf50756e0 not found: ID does not exist" Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.078901 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-scripts\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " pod="openstack/ceilometer-0" Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.079094 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l4qh\" (UniqueName: \"kubernetes.io/projected/3973fcbe-86c9-4b0e-9f53-de4af29601dc-kube-api-access-7l4qh\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " pod="openstack/ceilometer-0" Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.079133 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-config-data\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " pod="openstack/ceilometer-0" Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.079166 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " pod="openstack/ceilometer-0" Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.079246 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3973fcbe-86c9-4b0e-9f53-de4af29601dc-run-httpd\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " pod="openstack/ceilometer-0" Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.079311 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " pod="openstack/ceilometer-0" Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.079412 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3973fcbe-86c9-4b0e-9f53-de4af29601dc-log-httpd\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " pod="openstack/ceilometer-0" Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.080250 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3973fcbe-86c9-4b0e-9f53-de4af29601dc-log-httpd\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " pod="openstack/ceilometer-0" Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.081379 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3973fcbe-86c9-4b0e-9f53-de4af29601dc-run-httpd\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " pod="openstack/ceilometer-0" Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.084751 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " 
pod="openstack/ceilometer-0" Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.085008 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " pod="openstack/ceilometer-0" Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.088047 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-scripts\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " pod="openstack/ceilometer-0" Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.091600 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-config-data\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " pod="openstack/ceilometer-0" Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.100399 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l4qh\" (UniqueName: \"kubernetes.io/projected/3973fcbe-86c9-4b0e-9f53-de4af29601dc-kube-api-access-7l4qh\") pod \"ceilometer-0\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " pod="openstack/ceilometer-0" Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.272670 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.543838 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cae271e3-f39d-4512-be27-1568327ce8f0" path="/var/lib/kubelet/pods/cae271e3-f39d-4512-be27-1568327ce8f0/volumes" Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.752071 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:46:47 crc kubenswrapper[4825]: W0122 15:46:47.755595 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3973fcbe_86c9_4b0e_9f53_de4af29601dc.slice/crio-51ca0955a669b7d1833654794285f8dd65e56c1897fa520e57ead6747b70bb75 WatchSource:0}: Error finding container 51ca0955a669b7d1833654794285f8dd65e56c1897fa520e57ead6747b70bb75: Status 404 returned error can't find the container with id 51ca0955a669b7d1833654794285f8dd65e56c1897fa520e57ead6747b70bb75 Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.867233 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3973fcbe-86c9-4b0e-9f53-de4af29601dc","Type":"ContainerStarted","Data":"51ca0955a669b7d1833654794285f8dd65e56c1897fa520e57ead6747b70bb75"} Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.871458 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9vs7x" event={"ID":"7a02e958-e76f-4351-bf61-0b0f4ec0e410","Type":"ContainerStarted","Data":"87b6a24946b6b30dde93bac2c3870a45c6830f605f758a59f59dc31ae76ecb92"} Jan 22 15:46:47 crc kubenswrapper[4825]: I0122 15:46:47.890465 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-9vs7x" podStartSLOduration=2.384165379 podStartE2EDuration="31.890432049s" podCreationTimestamp="2026-01-22 15:46:16 +0000 UTC" firstStartedPulling="2026-01-22 15:46:17.605967666 +0000 UTC 
m=+1324.367494576" lastFinishedPulling="2026-01-22 15:46:47.112234326 +0000 UTC m=+1353.873761246" observedRunningTime="2026-01-22 15:46:47.888472583 +0000 UTC m=+1354.649999503" watchObservedRunningTime="2026-01-22 15:46:47.890432049 +0000 UTC m=+1354.651958949" Jan 22 15:46:48 crc kubenswrapper[4825]: I0122 15:46:48.886526 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3973fcbe-86c9-4b0e-9f53-de4af29601dc","Type":"ContainerStarted","Data":"b7c613ffcb08040ca7fdc3824c14e88d4c881e2306df9c4655f700bf84f743e9"} Jan 22 15:46:50 crc kubenswrapper[4825]: I0122 15:46:50.976334 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3973fcbe-86c9-4b0e-9f53-de4af29601dc","Type":"ContainerStarted","Data":"35f3d629e833aba6a9ac4eb6a2a96a7d0ad424fa84ace589ba3a80ee50b16c35"} Jan 22 15:46:51 crc kubenswrapper[4825]: I0122 15:46:51.988487 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3973fcbe-86c9-4b0e-9f53-de4af29601dc","Type":"ContainerStarted","Data":"64f794a848af2ca67e3e90e4c7a15972a0e39a453392c06bbb6682f121a0bbc1"} Jan 22 15:46:54 crc kubenswrapper[4825]: I0122 15:46:54.170466 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3973fcbe-86c9-4b0e-9f53-de4af29601dc","Type":"ContainerStarted","Data":"7f836f8334c2fd2f8a4b1ee6a7c15cbec9b7cff19d39aa5d9ce19ee2245f7409"} Jan 22 15:46:54 crc kubenswrapper[4825]: I0122 15:46:54.171097 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 15:46:54 crc kubenswrapper[4825]: I0122 15:46:54.204892 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.742897815 podStartE2EDuration="8.204865516s" podCreationTimestamp="2026-01-22 15:46:46 +0000 UTC" firstStartedPulling="2026-01-22 15:46:47.758141754 +0000 UTC m=+1354.519668664" 
lastFinishedPulling="2026-01-22 15:46:53.220109445 +0000 UTC m=+1359.981636365" observedRunningTime="2026-01-22 15:46:54.198160406 +0000 UTC m=+1360.959687346" watchObservedRunningTime="2026-01-22 15:46:54.204865516 +0000 UTC m=+1360.966392426" Jan 22 15:47:03 crc kubenswrapper[4825]: I0122 15:47:03.283765 4825 generic.go:334] "Generic (PLEG): container finished" podID="7a02e958-e76f-4351-bf61-0b0f4ec0e410" containerID="87b6a24946b6b30dde93bac2c3870a45c6830f605f758a59f59dc31ae76ecb92" exitCode=0 Jan 22 15:47:03 crc kubenswrapper[4825]: I0122 15:47:03.283843 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9vs7x" event={"ID":"7a02e958-e76f-4351-bf61-0b0f4ec0e410","Type":"ContainerDied","Data":"87b6a24946b6b30dde93bac2c3870a45c6830f605f758a59f59dc31ae76ecb92"} Jan 22 15:47:04 crc kubenswrapper[4825]: I0122 15:47:04.823304 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9vs7x" Jan 22 15:47:04 crc kubenswrapper[4825]: I0122 15:47:04.890297 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a02e958-e76f-4351-bf61-0b0f4ec0e410-scripts\") pod \"7a02e958-e76f-4351-bf61-0b0f4ec0e410\" (UID: \"7a02e958-e76f-4351-bf61-0b0f4ec0e410\") " Jan 22 15:47:04 crc kubenswrapper[4825]: I0122 15:47:04.890340 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a02e958-e76f-4351-bf61-0b0f4ec0e410-config-data\") pod \"7a02e958-e76f-4351-bf61-0b0f4ec0e410\" (UID: \"7a02e958-e76f-4351-bf61-0b0f4ec0e410\") " Jan 22 15:47:04 crc kubenswrapper[4825]: I0122 15:47:04.890574 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-592wt\" (UniqueName: \"kubernetes.io/projected/7a02e958-e76f-4351-bf61-0b0f4ec0e410-kube-api-access-592wt\") pod 
\"7a02e958-e76f-4351-bf61-0b0f4ec0e410\" (UID: \"7a02e958-e76f-4351-bf61-0b0f4ec0e410\") " Jan 22 15:47:04 crc kubenswrapper[4825]: I0122 15:47:04.890593 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a02e958-e76f-4351-bf61-0b0f4ec0e410-combined-ca-bundle\") pod \"7a02e958-e76f-4351-bf61-0b0f4ec0e410\" (UID: \"7a02e958-e76f-4351-bf61-0b0f4ec0e410\") " Jan 22 15:47:04 crc kubenswrapper[4825]: I0122 15:47:04.896315 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a02e958-e76f-4351-bf61-0b0f4ec0e410-kube-api-access-592wt" (OuterVolumeSpecName: "kube-api-access-592wt") pod "7a02e958-e76f-4351-bf61-0b0f4ec0e410" (UID: "7a02e958-e76f-4351-bf61-0b0f4ec0e410"). InnerVolumeSpecName "kube-api-access-592wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:47:04 crc kubenswrapper[4825]: I0122 15:47:04.897182 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a02e958-e76f-4351-bf61-0b0f4ec0e410-scripts" (OuterVolumeSpecName: "scripts") pod "7a02e958-e76f-4351-bf61-0b0f4ec0e410" (UID: "7a02e958-e76f-4351-bf61-0b0f4ec0e410"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:47:04 crc kubenswrapper[4825]: I0122 15:47:04.929594 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a02e958-e76f-4351-bf61-0b0f4ec0e410-config-data" (OuterVolumeSpecName: "config-data") pod "7a02e958-e76f-4351-bf61-0b0f4ec0e410" (UID: "7a02e958-e76f-4351-bf61-0b0f4ec0e410"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:47:04 crc kubenswrapper[4825]: I0122 15:47:04.932993 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a02e958-e76f-4351-bf61-0b0f4ec0e410-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a02e958-e76f-4351-bf61-0b0f4ec0e410" (UID: "7a02e958-e76f-4351-bf61-0b0f4ec0e410"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.002891 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a02e958-e76f-4351-bf61-0b0f4ec0e410-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.003722 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-592wt\" (UniqueName: \"kubernetes.io/projected/7a02e958-e76f-4351-bf61-0b0f4ec0e410-kube-api-access-592wt\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.003743 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a02e958-e76f-4351-bf61-0b0f4ec0e410-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.003758 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a02e958-e76f-4351-bf61-0b0f4ec0e410-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.327387 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9vs7x" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.327412 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9vs7x" event={"ID":"7a02e958-e76f-4351-bf61-0b0f4ec0e410","Type":"ContainerDied","Data":"bdcdf3860bf17c9469defe9e9a587353a80b28747cea4fcca4b6739420087540"} Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.331326 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdcdf3860bf17c9469defe9e9a587353a80b28747cea4fcca4b6739420087540" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.423684 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 22 15:47:05 crc kubenswrapper[4825]: E0122 15:47:05.424586 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a02e958-e76f-4351-bf61-0b0f4ec0e410" containerName="nova-cell0-conductor-db-sync" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.424606 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a02e958-e76f-4351-bf61-0b0f4ec0e410" containerName="nova-cell0-conductor-db-sync" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.424843 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a02e958-e76f-4351-bf61-0b0f4ec0e410" containerName="nova-cell0-conductor-db-sync" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.425725 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.433520 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-xt7m5" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.434796 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.445023 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.521339 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clpdv\" (UniqueName: \"kubernetes.io/projected/a97addeb-cbff-4929-9a25-e1a5de50a83d-kube-api-access-clpdv\") pod \"nova-cell0-conductor-0\" (UID: \"a97addeb-cbff-4929-9a25-e1a5de50a83d\") " pod="openstack/nova-cell0-conductor-0" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.521455 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a97addeb-cbff-4929-9a25-e1a5de50a83d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a97addeb-cbff-4929-9a25-e1a5de50a83d\") " pod="openstack/nova-cell0-conductor-0" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.521591 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97addeb-cbff-4929-9a25-e1a5de50a83d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a97addeb-cbff-4929-9a25-e1a5de50a83d\") " pod="openstack/nova-cell0-conductor-0" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.541328 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.541433 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.624203 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clpdv\" (UniqueName: \"kubernetes.io/projected/a97addeb-cbff-4929-9a25-e1a5de50a83d-kube-api-access-clpdv\") pod \"nova-cell0-conductor-0\" (UID: \"a97addeb-cbff-4929-9a25-e1a5de50a83d\") " pod="openstack/nova-cell0-conductor-0" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.624437 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a97addeb-cbff-4929-9a25-e1a5de50a83d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a97addeb-cbff-4929-9a25-e1a5de50a83d\") " pod="openstack/nova-cell0-conductor-0" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.624674 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97addeb-cbff-4929-9a25-e1a5de50a83d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a97addeb-cbff-4929-9a25-e1a5de50a83d\") " pod="openstack/nova-cell0-conductor-0" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.629623 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a97addeb-cbff-4929-9a25-e1a5de50a83d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a97addeb-cbff-4929-9a25-e1a5de50a83d\") " 
pod="openstack/nova-cell0-conductor-0" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.630200 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97addeb-cbff-4929-9a25-e1a5de50a83d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a97addeb-cbff-4929-9a25-e1a5de50a83d\") " pod="openstack/nova-cell0-conductor-0" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.640209 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clpdv\" (UniqueName: \"kubernetes.io/projected/a97addeb-cbff-4929-9a25-e1a5de50a83d-kube-api-access-clpdv\") pod \"nova-cell0-conductor-0\" (UID: \"a97addeb-cbff-4929-9a25-e1a5de50a83d\") " pod="openstack/nova-cell0-conductor-0" Jan 22 15:47:05 crc kubenswrapper[4825]: I0122 15:47:05.744052 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 22 15:47:06 crc kubenswrapper[4825]: I0122 15:47:06.215876 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 22 15:47:06 crc kubenswrapper[4825]: W0122 15:47:06.220264 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda97addeb_cbff_4929_9a25_e1a5de50a83d.slice/crio-900ef373d287aa89b85f00d02242a50c2510fa4b2771a9e7b18631ac5fa8a67e WatchSource:0}: Error finding container 900ef373d287aa89b85f00d02242a50c2510fa4b2771a9e7b18631ac5fa8a67e: Status 404 returned error can't find the container with id 900ef373d287aa89b85f00d02242a50c2510fa4b2771a9e7b18631ac5fa8a67e Jan 22 15:47:06 crc kubenswrapper[4825]: I0122 15:47:06.344271 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a97addeb-cbff-4929-9a25-e1a5de50a83d","Type":"ContainerStarted","Data":"900ef373d287aa89b85f00d02242a50c2510fa4b2771a9e7b18631ac5fa8a67e"} Jan 22 15:47:07 
crc kubenswrapper[4825]: I0122 15:47:07.357029 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a97addeb-cbff-4929-9a25-e1a5de50a83d","Type":"ContainerStarted","Data":"a54f2fd0a457ffe5d380aeafb93f46797baf169bac7e29b853837ac2c5e294b3"} Jan 22 15:47:07 crc kubenswrapper[4825]: I0122 15:47:07.357355 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 22 15:47:07 crc kubenswrapper[4825]: I0122 15:47:07.378172 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.378151982 podStartE2EDuration="2.378151982s" podCreationTimestamp="2026-01-22 15:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:47:07.375563748 +0000 UTC m=+1374.137090678" watchObservedRunningTime="2026-01-22 15:47:07.378151982 +0000 UTC m=+1374.139678892" Jan 22 15:47:15 crc kubenswrapper[4825]: I0122 15:47:15.581163 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qrqt8"] Jan 22 15:47:15 crc kubenswrapper[4825]: I0122 15:47:15.583900 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qrqt8" Jan 22 15:47:15 crc kubenswrapper[4825]: I0122 15:47:15.598796 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qrqt8"] Jan 22 15:47:15 crc kubenswrapper[4825]: I0122 15:47:15.755562 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/440a9cba-a225-40d8-b171-c15c47d7d223-utilities\") pod \"redhat-operators-qrqt8\" (UID: \"440a9cba-a225-40d8-b171-c15c47d7d223\") " pod="openshift-marketplace/redhat-operators-qrqt8" Jan 22 15:47:15 crc kubenswrapper[4825]: I0122 15:47:15.755727 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/440a9cba-a225-40d8-b171-c15c47d7d223-catalog-content\") pod \"redhat-operators-qrqt8\" (UID: \"440a9cba-a225-40d8-b171-c15c47d7d223\") " pod="openshift-marketplace/redhat-operators-qrqt8" Jan 22 15:47:15 crc kubenswrapper[4825]: I0122 15:47:15.755780 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st7bt\" (UniqueName: \"kubernetes.io/projected/440a9cba-a225-40d8-b171-c15c47d7d223-kube-api-access-st7bt\") pod \"redhat-operators-qrqt8\" (UID: \"440a9cba-a225-40d8-b171-c15c47d7d223\") " pod="openshift-marketplace/redhat-operators-qrqt8" Jan 22 15:47:15 crc kubenswrapper[4825]: I0122 15:47:15.772964 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 22 15:47:15 crc kubenswrapper[4825]: I0122 15:47:15.858063 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/440a9cba-a225-40d8-b171-c15c47d7d223-catalog-content\") pod \"redhat-operators-qrqt8\" (UID: \"440a9cba-a225-40d8-b171-c15c47d7d223\") " 
pod="openshift-marketplace/redhat-operators-qrqt8" Jan 22 15:47:15 crc kubenswrapper[4825]: I0122 15:47:15.858128 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st7bt\" (UniqueName: \"kubernetes.io/projected/440a9cba-a225-40d8-b171-c15c47d7d223-kube-api-access-st7bt\") pod \"redhat-operators-qrqt8\" (UID: \"440a9cba-a225-40d8-b171-c15c47d7d223\") " pod="openshift-marketplace/redhat-operators-qrqt8" Jan 22 15:47:15 crc kubenswrapper[4825]: I0122 15:47:15.858239 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/440a9cba-a225-40d8-b171-c15c47d7d223-utilities\") pod \"redhat-operators-qrqt8\" (UID: \"440a9cba-a225-40d8-b171-c15c47d7d223\") " pod="openshift-marketplace/redhat-operators-qrqt8" Jan 22 15:47:15 crc kubenswrapper[4825]: I0122 15:47:15.858640 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/440a9cba-a225-40d8-b171-c15c47d7d223-catalog-content\") pod \"redhat-operators-qrqt8\" (UID: \"440a9cba-a225-40d8-b171-c15c47d7d223\") " pod="openshift-marketplace/redhat-operators-qrqt8" Jan 22 15:47:15 crc kubenswrapper[4825]: I0122 15:47:15.858853 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/440a9cba-a225-40d8-b171-c15c47d7d223-utilities\") pod \"redhat-operators-qrqt8\" (UID: \"440a9cba-a225-40d8-b171-c15c47d7d223\") " pod="openshift-marketplace/redhat-operators-qrqt8" Jan 22 15:47:15 crc kubenswrapper[4825]: I0122 15:47:15.879855 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st7bt\" (UniqueName: \"kubernetes.io/projected/440a9cba-a225-40d8-b171-c15c47d7d223-kube-api-access-st7bt\") pod \"redhat-operators-qrqt8\" (UID: \"440a9cba-a225-40d8-b171-c15c47d7d223\") " pod="openshift-marketplace/redhat-operators-qrqt8" Jan 22 15:47:15 
crc kubenswrapper[4825]: I0122 15:47:15.910768 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qrqt8" Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.338469 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qrqt8"] Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.394502 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-q7pvf"] Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.396659 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-q7pvf" Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.399847 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.400089 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.415399 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-q7pvf"] Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.477620 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d5eac43-e644-430f-b4c7-8003b6984a30-scripts\") pod \"nova-cell0-cell-mapping-q7pvf\" (UID: \"1d5eac43-e644-430f-b4c7-8003b6984a30\") " pod="openstack/nova-cell0-cell-mapping-q7pvf" Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.478051 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5eac43-e644-430f-b4c7-8003b6984a30-config-data\") pod \"nova-cell0-cell-mapping-q7pvf\" (UID: \"1d5eac43-e644-430f-b4c7-8003b6984a30\") " pod="openstack/nova-cell0-cell-mapping-q7pvf" Jan 
22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.478201 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5eac43-e644-430f-b4c7-8003b6984a30-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-q7pvf\" (UID: \"1d5eac43-e644-430f-b4c7-8003b6984a30\") " pod="openstack/nova-cell0-cell-mapping-q7pvf" Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.478331 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpjn2\" (UniqueName: \"kubernetes.io/projected/1d5eac43-e644-430f-b4c7-8003b6984a30-kube-api-access-mpjn2\") pod \"nova-cell0-cell-mapping-q7pvf\" (UID: \"1d5eac43-e644-430f-b4c7-8003b6984a30\") " pod="openstack/nova-cell0-cell-mapping-q7pvf" Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.509832 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrqt8" event={"ID":"440a9cba-a225-40d8-b171-c15c47d7d223","Type":"ContainerStarted","Data":"2966c25b45074702d7af8a00324986fe5f18f12f089e5dd15690381935a396f9"} Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.583809 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5eac43-e644-430f-b4c7-8003b6984a30-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-q7pvf\" (UID: \"1d5eac43-e644-430f-b4c7-8003b6984a30\") " pod="openstack/nova-cell0-cell-mapping-q7pvf" Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.583891 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpjn2\" (UniqueName: \"kubernetes.io/projected/1d5eac43-e644-430f-b4c7-8003b6984a30-kube-api-access-mpjn2\") pod \"nova-cell0-cell-mapping-q7pvf\" (UID: \"1d5eac43-e644-430f-b4c7-8003b6984a30\") " pod="openstack/nova-cell0-cell-mapping-q7pvf" Jan 22 15:47:16 crc kubenswrapper[4825]: 
I0122 15:47:16.583956 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d5eac43-e644-430f-b4c7-8003b6984a30-scripts\") pod \"nova-cell0-cell-mapping-q7pvf\" (UID: \"1d5eac43-e644-430f-b4c7-8003b6984a30\") " pod="openstack/nova-cell0-cell-mapping-q7pvf"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.584104 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5eac43-e644-430f-b4c7-8003b6984a30-config-data\") pod \"nova-cell0-cell-mapping-q7pvf\" (UID: \"1d5eac43-e644-430f-b4c7-8003b6984a30\") " pod="openstack/nova-cell0-cell-mapping-q7pvf"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.598438 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.600429 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.604768 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d5eac43-e644-430f-b4c7-8003b6984a30-scripts\") pod \"nova-cell0-cell-mapping-q7pvf\" (UID: \"1d5eac43-e644-430f-b4c7-8003b6984a30\") " pod="openstack/nova-cell0-cell-mapping-q7pvf"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.605654 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5eac43-e644-430f-b4c7-8003b6984a30-config-data\") pod \"nova-cell0-cell-mapping-q7pvf\" (UID: \"1d5eac43-e644-430f-b4c7-8003b6984a30\") " pod="openstack/nova-cell0-cell-mapping-q7pvf"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.605847 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5eac43-e644-430f-b4c7-8003b6984a30-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-q7pvf\" (UID: \"1d5eac43-e644-430f-b4c7-8003b6984a30\") " pod="openstack/nova-cell0-cell-mapping-q7pvf"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.610410 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.621889 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpjn2\" (UniqueName: \"kubernetes.io/projected/1d5eac43-e644-430f-b4c7-8003b6984a30-kube-api-access-mpjn2\") pod \"nova-cell0-cell-mapping-q7pvf\" (UID: \"1d5eac43-e644-430f-b4c7-8003b6984a30\") " pod="openstack/nova-cell0-cell-mapping-q7pvf"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.665129 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.691537 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltkp2\" (UniqueName: \"kubernetes.io/projected/df949131-0f5e-4264-bbf3-a62b57cfb952-kube-api-access-ltkp2\") pod \"nova-scheduler-0\" (UID: \"df949131-0f5e-4264-bbf3-a62b57cfb952\") " pod="openstack/nova-scheduler-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.691659 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df949131-0f5e-4264-bbf3-a62b57cfb952-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"df949131-0f5e-4264-bbf3-a62b57cfb952\") " pod="openstack/nova-scheduler-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.700462 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df949131-0f5e-4264-bbf3-a62b57cfb952-config-data\") pod \"nova-scheduler-0\" (UID: \"df949131-0f5e-4264-bbf3-a62b57cfb952\") " pod="openstack/nova-scheduler-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.765859 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.768420 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.791489 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.797507 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-q7pvf"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.805205 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df949131-0f5e-4264-bbf3-a62b57cfb952-config-data\") pod \"nova-scheduler-0\" (UID: \"df949131-0f5e-4264-bbf3-a62b57cfb952\") " pod="openstack/nova-scheduler-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.805304 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltkp2\" (UniqueName: \"kubernetes.io/projected/df949131-0f5e-4264-bbf3-a62b57cfb952-kube-api-access-ltkp2\") pod \"nova-scheduler-0\" (UID: \"df949131-0f5e-4264-bbf3-a62b57cfb952\") " pod="openstack/nova-scheduler-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.805370 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df949131-0f5e-4264-bbf3-a62b57cfb952-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"df949131-0f5e-4264-bbf3-a62b57cfb952\") " pod="openstack/nova-scheduler-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.808287 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.830305 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.832821 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.854375 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df949131-0f5e-4264-bbf3-a62b57cfb952-config-data\") pod \"nova-scheduler-0\" (UID: \"df949131-0f5e-4264-bbf3-a62b57cfb952\") " pod="openstack/nova-scheduler-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.854570 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df949131-0f5e-4264-bbf3-a62b57cfb952-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"df949131-0f5e-4264-bbf3-a62b57cfb952\") " pod="openstack/nova-scheduler-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.854870 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.867306 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltkp2\" (UniqueName: \"kubernetes.io/projected/df949131-0f5e-4264-bbf3-a62b57cfb952-kube-api-access-ltkp2\") pod \"nova-scheduler-0\" (UID: \"df949131-0f5e-4264-bbf3-a62b57cfb952\") " pod="openstack/nova-scheduler-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.875540 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.899522 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-2nzzl"]
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.902349 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.924813 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bf1ff63-7ee6-4262-95b3-397c96d3649f-config-data\") pod \"nova-metadata-0\" (UID: \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.924966 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8qjz\" (UniqueName: \"kubernetes.io/projected/5bf1ff63-7ee6-4262-95b3-397c96d3649f-kube-api-access-x8qjz\") pod \"nova-metadata-0\" (UID: \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.926187 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb504e0-4450-4771-943e-1f4ebe7c074e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3cb504e0-4450-4771-943e-1f4ebe7c074e\") " pod="openstack/nova-api-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.926286 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8sq8\" (UniqueName: \"kubernetes.io/projected/3cb504e0-4450-4771-943e-1f4ebe7c074e-kube-api-access-w8sq8\") pod \"nova-api-0\" (UID: \"3cb504e0-4450-4771-943e-1f4ebe7c074e\") " pod="openstack/nova-api-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.926316 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cb504e0-4450-4771-943e-1f4ebe7c074e-config-data\") pod \"nova-api-0\" (UID: \"3cb504e0-4450-4771-943e-1f4ebe7c074e\") " pod="openstack/nova-api-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.926338 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cb504e0-4450-4771-943e-1f4ebe7c074e-logs\") pod \"nova-api-0\" (UID: \"3cb504e0-4450-4771-943e-1f4ebe7c074e\") " pod="openstack/nova-api-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.926379 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bf1ff63-7ee6-4262-95b3-397c96d3649f-logs\") pod \"nova-metadata-0\" (UID: \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.926396 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf1ff63-7ee6-4262-95b3-397c96d3649f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.957531 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 22 15:47:16 crc kubenswrapper[4825]: I0122 15:47:16.960522 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-2nzzl"]
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.019753 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.036270 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bf1ff63-7ee6-4262-95b3-397c96d3649f-config-data\") pod \"nova-metadata-0\" (UID: \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.036386 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8qjz\" (UniqueName: \"kubernetes.io/projected/5bf1ff63-7ee6-4262-95b3-397c96d3649f-kube-api-access-x8qjz\") pod \"nova-metadata-0\" (UID: \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.036447 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5l7v\" (UniqueName: \"kubernetes.io/projected/56dacd23-6234-4d06-968b-ed6a51d03f70-kube-api-access-p5l7v\") pod \"dnsmasq-dns-884c8b8f5-2nzzl\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") " pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.036475 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb504e0-4450-4771-943e-1f4ebe7c074e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3cb504e0-4450-4771-943e-1f4ebe7c074e\") " pod="openstack/nova-api-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.036539 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8sq8\" (UniqueName: \"kubernetes.io/projected/3cb504e0-4450-4771-943e-1f4ebe7c074e-kube-api-access-w8sq8\") pod \"nova-api-0\" (UID: \"3cb504e0-4450-4771-943e-1f4ebe7c074e\") " pod="openstack/nova-api-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.036566 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cb504e0-4450-4771-943e-1f4ebe7c074e-config-data\") pod \"nova-api-0\" (UID: \"3cb504e0-4450-4771-943e-1f4ebe7c074e\") " pod="openstack/nova-api-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.036600 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cb504e0-4450-4771-943e-1f4ebe7c074e-logs\") pod \"nova-api-0\" (UID: \"3cb504e0-4450-4771-943e-1f4ebe7c074e\") " pod="openstack/nova-api-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.036645 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bf1ff63-7ee6-4262-95b3-397c96d3649f-logs\") pod \"nova-metadata-0\" (UID: \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.036676 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf1ff63-7ee6-4262-95b3-397c96d3649f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.036721 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-dns-svc\") pod \"dnsmasq-dns-884c8b8f5-2nzzl\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") " pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.036748 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-2nzzl\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") " pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.036803 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-config\") pod \"dnsmasq-dns-884c8b8f5-2nzzl\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") " pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.036852 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-2nzzl\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") " pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.036888 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-2nzzl\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") " pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.039617 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.041030 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cb504e0-4450-4771-943e-1f4ebe7c074e-logs\") pod \"nova-api-0\" (UID: \"3cb504e0-4450-4771-943e-1f4ebe7c074e\") " pod="openstack/nova-api-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.042116 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bf1ff63-7ee6-4262-95b3-397c96d3649f-logs\") pod \"nova-metadata-0\" (UID: \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.053372 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.069730 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cb504e0-4450-4771-943e-1f4ebe7c074e-config-data\") pod \"nova-api-0\" (UID: \"3cb504e0-4450-4771-943e-1f4ebe7c074e\") " pod="openstack/nova-api-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.073429 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb504e0-4450-4771-943e-1f4ebe7c074e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3cb504e0-4450-4771-943e-1f4ebe7c074e\") " pod="openstack/nova-api-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.075336 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf1ff63-7ee6-4262-95b3-397c96d3649f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.089798 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bf1ff63-7ee6-4262-95b3-397c96d3649f-config-data\") pod \"nova-metadata-0\" (UID: \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.112354 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.125910 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8qjz\" (UniqueName: \"kubernetes.io/projected/5bf1ff63-7ee6-4262-95b3-397c96d3649f-kube-api-access-x8qjz\") pod \"nova-metadata-0\" (UID: \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.139194 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rb52\" (UniqueName: \"kubernetes.io/projected/a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc-kube-api-access-6rb52\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.139269 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-dns-svc\") pod \"dnsmasq-dns-884c8b8f5-2nzzl\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") " pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.139300 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.139327 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-2nzzl\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") " pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.139384 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-config\") pod \"dnsmasq-dns-884c8b8f5-2nzzl\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") " pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.139426 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-2nzzl\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") " pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.139464 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-2nzzl\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") " pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.139600 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.139636 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5l7v\" (UniqueName: \"kubernetes.io/projected/56dacd23-6234-4d06-968b-ed6a51d03f70-kube-api-access-p5l7v\") pod \"dnsmasq-dns-884c8b8f5-2nzzl\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") " pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.141380 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-dns-svc\") pod \"dnsmasq-dns-884c8b8f5-2nzzl\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") " pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.141775 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-2nzzl\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") " pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.142483 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-2nzzl\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") " pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.151459 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-2nzzl\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") " pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.161515 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8sq8\" (UniqueName: \"kubernetes.io/projected/3cb504e0-4450-4771-943e-1f4ebe7c074e-kube-api-access-w8sq8\") pod \"nova-api-0\" (UID: \"3cb504e0-4450-4771-943e-1f4ebe7c074e\") " pod="openstack/nova-api-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.161806 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-config\") pod \"dnsmasq-dns-884c8b8f5-2nzzl\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") " pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.226006 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5l7v\" (UniqueName: \"kubernetes.io/projected/56dacd23-6234-4d06-968b-ed6a51d03f70-kube-api-access-p5l7v\") pod \"dnsmasq-dns-884c8b8f5-2nzzl\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") " pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.251417 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.251669 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.251822 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rb52\" (UniqueName: \"kubernetes.io/projected/a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc-kube-api-access-6rb52\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.295605 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.303969 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.304810 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.314277 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.332476 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rb52\" (UniqueName: \"kubernetes.io/projected/a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc-kube-api-access-6rb52\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.384371 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.416567 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.425005 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.568887 4825 generic.go:334] "Generic (PLEG): container finished" podID="440a9cba-a225-40d8-b171-c15c47d7d223" containerID="26b6493a41a2935c0251c9efc0c32c3eaa5e58793987c28aead832799f0c03db" exitCode=0
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.572859 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrqt8" event={"ID":"440a9cba-a225-40d8-b171-c15c47d7d223","Type":"ContainerDied","Data":"26b6493a41a2935c0251c9efc0c32c3eaa5e58793987c28aead832799f0c03db"}
Jan 22 15:47:17 crc kubenswrapper[4825]: I0122 15:47:17.942165 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-q7pvf"]
Jan 22 15:47:18 crc kubenswrapper[4825]: I0122 15:47:18.160288 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 22 15:47:18 crc kubenswrapper[4825]: I0122 15:47:18.470207 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-2nzzl"]
Jan 22 15:47:18 crc kubenswrapper[4825]: I0122 15:47:18.672190 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 22 15:47:18 crc kubenswrapper[4825]: I0122 15:47:18.680810 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"df949131-0f5e-4264-bbf3-a62b57cfb952","Type":"ContainerStarted","Data":"e0312f1695ff0046ee28a9bf1cefc49bc48cb95950183668291814392eb9327f"}
Jan 22 15:47:18 crc kubenswrapper[4825]: I0122 15:47:18.702213 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-q7pvf" event={"ID":"1d5eac43-e644-430f-b4c7-8003b6984a30","Type":"ContainerStarted","Data":"c7263beae0bb6b086aa1746c503d94be1e94c9f226f01f926f7c54fc37b70381"}
Jan 22 15:47:18 crc kubenswrapper[4825]: I0122 15:47:18.726696 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 15:47:18 crc kubenswrapper[4825]: I0122 15:47:18.739794 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl" event={"ID":"56dacd23-6234-4d06-968b-ed6a51d03f70","Type":"ContainerStarted","Data":"b8fedb6d422eb0691c2d5a57bd485b54a5b13090839b1c44731386e6d7b4adbb"}
Jan 22 15:47:18 crc kubenswrapper[4825]: I0122 15:47:18.776634 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 22 15:47:18 crc kubenswrapper[4825]: W0122 15:47:18.831929 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bf1ff63_7ee6_4262_95b3_397c96d3649f.slice/crio-591fd22ca8157148cf98926a10a7973a954345b65790169f3a2204bdea92f1fa WatchSource:0}: Error finding container 591fd22ca8157148cf98926a10a7973a954345b65790169f3a2204bdea92f1fa: Status 404 returned error can't find the container with id 591fd22ca8157148cf98926a10a7973a954345b65790169f3a2204bdea92f1fa
Jan 22 15:47:18 crc kubenswrapper[4825]: W0122 15:47:18.851376 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5f377bb_6d9c_4a59_acd6_e5d1fc6306fc.slice/crio-4e9f77353bd4e07cde314c9d7a284367bfe1998af9bc606f32ea5b86fd9e1370 WatchSource:0}: Error finding container 4e9f77353bd4e07cde314c9d7a284367bfe1998af9bc606f32ea5b86fd9e1370: Status 404 returned error can't find the container with id 4e9f77353bd4e07cde314c9d7a284367bfe1998af9bc606f32ea5b86fd9e1370
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.064051 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8qbh5"]
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.066138 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8qbh5"
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.081653 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.081953 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.092068 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8qbh5"]
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.102510 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3802a459-6af8-4a3f-8087-529583d75594-config-data\") pod \"nova-cell1-conductor-db-sync-8qbh5\" (UID: \"3802a459-6af8-4a3f-8087-529583d75594\") " pod="openstack/nova-cell1-conductor-db-sync-8qbh5"
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.102618 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3802a459-6af8-4a3f-8087-529583d75594-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8qbh5\" (UID: \"3802a459-6af8-4a3f-8087-529583d75594\") " pod="openstack/nova-cell1-conductor-db-sync-8qbh5"
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.102729 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3802a459-6af8-4a3f-8087-529583d75594-scripts\") pod \"nova-cell1-conductor-db-sync-8qbh5\" (UID: \"3802a459-6af8-4a3f-8087-529583d75594\") " pod="openstack/nova-cell1-conductor-db-sync-8qbh5"
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.102772 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m7pd\" (UniqueName: \"kubernetes.io/projected/3802a459-6af8-4a3f-8087-529583d75594-kube-api-access-8m7pd\") pod \"nova-cell1-conductor-db-sync-8qbh5\" (UID: \"3802a459-6af8-4a3f-8087-529583d75594\") " pod="openstack/nova-cell1-conductor-db-sync-8qbh5"
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.205480 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m7pd\" (UniqueName: \"kubernetes.io/projected/3802a459-6af8-4a3f-8087-529583d75594-kube-api-access-8m7pd\") pod \"nova-cell1-conductor-db-sync-8qbh5\" (UID: \"3802a459-6af8-4a3f-8087-529583d75594\") " pod="openstack/nova-cell1-conductor-db-sync-8qbh5"
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.205646 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3802a459-6af8-4a3f-8087-529583d75594-config-data\") pod \"nova-cell1-conductor-db-sync-8qbh5\" (UID: \"3802a459-6af8-4a3f-8087-529583d75594\") " pod="openstack/nova-cell1-conductor-db-sync-8qbh5"
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.205704 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3802a459-6af8-4a3f-8087-529583d75594-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8qbh5\" (UID: \"3802a459-6af8-4a3f-8087-529583d75594\") " pod="openstack/nova-cell1-conductor-db-sync-8qbh5"
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.205788 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3802a459-6af8-4a3f-8087-529583d75594-scripts\") pod \"nova-cell1-conductor-db-sync-8qbh5\" (UID: \"3802a459-6af8-4a3f-8087-529583d75594\") " pod="openstack/nova-cell1-conductor-db-sync-8qbh5"
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.215927 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3802a459-6af8-4a3f-8087-529583d75594-config-data\") pod \"nova-cell1-conductor-db-sync-8qbh5\" (UID: \"3802a459-6af8-4a3f-8087-529583d75594\") " pod="openstack/nova-cell1-conductor-db-sync-8qbh5"
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.227887 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m7pd\" (UniqueName: \"kubernetes.io/projected/3802a459-6af8-4a3f-8087-529583d75594-kube-api-access-8m7pd\") pod \"nova-cell1-conductor-db-sync-8qbh5\" (UID: \"3802a459-6af8-4a3f-8087-529583d75594\") " pod="openstack/nova-cell1-conductor-db-sync-8qbh5"
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.228477 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3802a459-6af8-4a3f-8087-529583d75594-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8qbh5\" (UID: \"3802a459-6af8-4a3f-8087-529583d75594\") " pod="openstack/nova-cell1-conductor-db-sync-8qbh5"
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.236483 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3802a459-6af8-4a3f-8087-529583d75594-scripts\") pod \"nova-cell1-conductor-db-sync-8qbh5\" (UID: \"3802a459-6af8-4a3f-8087-529583d75594\") " pod="openstack/nova-cell1-conductor-db-sync-8qbh5"
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.436275 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8qbh5"
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.789570 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5bf1ff63-7ee6-4262-95b3-397c96d3649f","Type":"ContainerStarted","Data":"591fd22ca8157148cf98926a10a7973a954345b65790169f3a2204bdea92f1fa"}
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.807276 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-q7pvf" event={"ID":"1d5eac43-e644-430f-b4c7-8003b6984a30","Type":"ContainerStarted","Data":"76444e0def286411aaecb5e2b7b368677ced5dbdea1b6f1ab77a176368441ccb"}
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.822471 4825 generic.go:334] "Generic (PLEG): container finished" podID="56dacd23-6234-4d06-968b-ed6a51d03f70" containerID="1c202468e52229edf2796b077e04e5ab359d5e4213179ef0e62aa5afe28d40b9" exitCode=0
Jan 22 15:47:19 crc kubenswrapper[4825]: I0122 15:47:19.822610 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl" event={"ID":"56dacd23-6234-4d06-968b-ed6a51d03f70","Type":"ContainerDied","Data":"1c202468e52229edf2796b077e04e5ab359d5e4213179ef0e62aa5afe28d40b9"}
Jan 22 15:47:20 crc kubenswrapper[4825]: I0122 15:47:20.044248 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrqt8" event={"ID":"440a9cba-a225-40d8-b171-c15c47d7d223","Type":"ContainerStarted","Data":"06d1c0de023ea77b2a5c10f56136ecdc5f2af3e5637480e5175d60c43245fc8d"}
Jan 22 15:47:20 crc kubenswrapper[4825]: I0122 15:47:20.058214 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc","Type":"ContainerStarted","Data":"4e9f77353bd4e07cde314c9d7a284367bfe1998af9bc606f32ea5b86fd9e1370"}
Jan 22 15:47:20 crc kubenswrapper[4825]: I0122 15:47:20.078462 4825 kubelet.go:2453] "SyncLoop (PLEG):
event for pod" pod="openstack/nova-api-0" event={"ID":"3cb504e0-4450-4771-943e-1f4ebe7c074e","Type":"ContainerStarted","Data":"96098c7a2a57b5d7a952c9ea745f84468bfb8b42f76ae2adc0b981c3c667e34b"} Jan 22 15:47:20 crc kubenswrapper[4825]: I0122 15:47:20.139877 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-q7pvf" podStartSLOduration=4.139857334 podStartE2EDuration="4.139857334s" podCreationTimestamp="2026-01-22 15:47:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:47:20.058448308 +0000 UTC m=+1386.819975228" watchObservedRunningTime="2026-01-22 15:47:20.139857334 +0000 UTC m=+1386.901384244" Jan 22 15:47:20 crc kubenswrapper[4825]: I0122 15:47:20.355071 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8qbh5"] Jan 22 15:47:21 crc kubenswrapper[4825]: I0122 15:47:21.277946 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8qbh5" event={"ID":"3802a459-6af8-4a3f-8087-529583d75594","Type":"ContainerStarted","Data":"f8d85efd37ea8dc7ff0830b2241a6253c94fbd1de81fee3b3aff4db16b0a1662"} Jan 22 15:47:21 crc kubenswrapper[4825]: I0122 15:47:21.278242 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8qbh5" event={"ID":"3802a459-6af8-4a3f-8087-529583d75594","Type":"ContainerStarted","Data":"f6fbefef3a63109e4f6fe9ca5362737444e43fab5ad768b7b131d47666affaac"} Jan 22 15:47:21 crc kubenswrapper[4825]: I0122 15:47:21.293617 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl" event={"ID":"56dacd23-6234-4d06-968b-ed6a51d03f70","Type":"ContainerStarted","Data":"fa96f120abd6843a27e0f02e89394736626c3a412c9a65b8a8bf4970d4abc943"} Jan 22 15:47:21 crc kubenswrapper[4825]: I0122 15:47:21.293678 4825 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl" Jan 22 15:47:21 crc kubenswrapper[4825]: I0122 15:47:21.336785 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl" podStartSLOduration=5.336297349 podStartE2EDuration="5.336297349s" podCreationTimestamp="2026-01-22 15:47:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:47:21.323349481 +0000 UTC m=+1388.084876391" watchObservedRunningTime="2026-01-22 15:47:21.336297349 +0000 UTC m=+1388.097824259" Jan 22 15:47:21 crc kubenswrapper[4825]: I0122 15:47:21.811610 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 15:47:21 crc kubenswrapper[4825]: I0122 15:47:21.827571 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 15:47:22 crc kubenswrapper[4825]: I0122 15:47:22.332826 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-8qbh5" podStartSLOduration=3.332800735 podStartE2EDuration="3.332800735s" podCreationTimestamp="2026-01-22 15:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:47:22.328367569 +0000 UTC m=+1389.089894499" watchObservedRunningTime="2026-01-22 15:47:22.332800735 +0000 UTC m=+1389.094327645" Jan 22 15:47:26 crc kubenswrapper[4825]: I0122 15:47:26.359346 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 15:47:26 crc kubenswrapper[4825]: I0122 15:47:26.360209 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="684234f5-b409-42a4-9494-52a0565b000c" containerName="kube-state-metrics" 
containerID="cri-o://84db6b30d9b984e4a5f1e5a2a34f8c4475bebdaedc73657684547ae402504e6f" gracePeriod=30 Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.196436 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.390121 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl" Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.470930 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"df949131-0f5e-4264-bbf3-a62b57cfb952","Type":"ContainerStarted","Data":"c18c983be3c8183e56ce6db0da5407ebf37285663e8e4af9a49a5d9d9d8461d7"} Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.476748 4825 generic.go:334] "Generic (PLEG): container finished" podID="684234f5-b409-42a4-9494-52a0565b000c" containerID="84db6b30d9b984e4a5f1e5a2a34f8c4475bebdaedc73657684547ae402504e6f" exitCode=2 Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.476813 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"684234f5-b409-42a4-9494-52a0565b000c","Type":"ContainerDied","Data":"84db6b30d9b984e4a5f1e5a2a34f8c4475bebdaedc73657684547ae402504e6f"} Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.476839 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"684234f5-b409-42a4-9494-52a0565b000c","Type":"ContainerDied","Data":"db9b086a67871c23179e281a76ddb166dfe97552eb519629d032371bae48a844"} Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.476859 4825 scope.go:117] "RemoveContainer" containerID="84db6b30d9b984e4a5f1e5a2a34f8c4475bebdaedc73657684547ae402504e6f" Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.477027 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.485502 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-hkzt2"] Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.485827 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58bd69657f-hkzt2" podUID="91f09962-57c2-42b0-9077-05b26c5899b3" containerName="dnsmasq-dns" containerID="cri-o://533415d684514fcc5cb6e9b617c61baf7a74ce45888b4e89ca677e52b85e84cc" gracePeriod=10 Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.490295 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx7mg\" (UniqueName: \"kubernetes.io/projected/684234f5-b409-42a4-9494-52a0565b000c-kube-api-access-wx7mg\") pod \"684234f5-b409-42a4-9494-52a0565b000c\" (UID: \"684234f5-b409-42a4-9494-52a0565b000c\") " Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.508886 4825 generic.go:334] "Generic (PLEG): container finished" podID="440a9cba-a225-40d8-b171-c15c47d7d223" containerID="06d1c0de023ea77b2a5c10f56136ecdc5f2af3e5637480e5175d60c43245fc8d" exitCode=0 Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.509070 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrqt8" event={"ID":"440a9cba-a225-40d8-b171-c15c47d7d223","Type":"ContainerDied","Data":"06d1c0de023ea77b2a5c10f56136ecdc5f2af3e5637480e5175d60c43245fc8d"} Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.511767 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc","Type":"ContainerStarted","Data":"cc901ab508fbce2c4f479010e2ca5d9207e071ccb4186a6875e127ac1c224744"} Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.511771 4825 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-cell1-novncproxy-0" podUID="a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://cc901ab508fbce2c4f479010e2ca5d9207e071ccb4186a6875e127ac1c224744" gracePeriod=30 Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.513820 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cb504e0-4450-4771-943e-1f4ebe7c074e","Type":"ContainerStarted","Data":"4a5a4778dc4e5d84b2b122dc02977731a80395ebe450ef0eed7671fffb256ac5"} Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.516444 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.779081809 podStartE2EDuration="11.516420264s" podCreationTimestamp="2026-01-22 15:47:16 +0000 UTC" firstStartedPulling="2026-01-22 15:47:18.171064233 +0000 UTC m=+1384.932591153" lastFinishedPulling="2026-01-22 15:47:25.908402698 +0000 UTC m=+1392.669929608" observedRunningTime="2026-01-22 15:47:27.490949399 +0000 UTC m=+1394.252476309" watchObservedRunningTime="2026-01-22 15:47:27.516420264 +0000 UTC m=+1394.277947184" Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.537349 4825 scope.go:117] "RemoveContainer" containerID="84db6b30d9b984e4a5f1e5a2a34f8c4475bebdaedc73657684547ae402504e6f" Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.538402 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/684234f5-b409-42a4-9494-52a0565b000c-kube-api-access-wx7mg" (OuterVolumeSpecName: "kube-api-access-wx7mg") pod "684234f5-b409-42a4-9494-52a0565b000c" (UID: "684234f5-b409-42a4-9494-52a0565b000c"). InnerVolumeSpecName "kube-api-access-wx7mg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:47:27 crc kubenswrapper[4825]: E0122 15:47:27.538831 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84db6b30d9b984e4a5f1e5a2a34f8c4475bebdaedc73657684547ae402504e6f\": container with ID starting with 84db6b30d9b984e4a5f1e5a2a34f8c4475bebdaedc73657684547ae402504e6f not found: ID does not exist" containerID="84db6b30d9b984e4a5f1e5a2a34f8c4475bebdaedc73657684547ae402504e6f" Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.538868 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84db6b30d9b984e4a5f1e5a2a34f8c4475bebdaedc73657684547ae402504e6f"} err="failed to get container status \"84db6b30d9b984e4a5f1e5a2a34f8c4475bebdaedc73657684547ae402504e6f\": rpc error: code = NotFound desc = could not find container \"84db6b30d9b984e4a5f1e5a2a34f8c4475bebdaedc73657684547ae402504e6f\": container with ID starting with 84db6b30d9b984e4a5f1e5a2a34f8c4475bebdaedc73657684547ae402504e6f not found: ID does not exist" Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.544415 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5bf1ff63-7ee6-4262-95b3-397c96d3649f","Type":"ContainerStarted","Data":"6c6920faf30aa97ffaed50449e54ed26f14a17393c0e5d048a3ec9443032eb26"} Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.595528 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx7mg\" (UniqueName: \"kubernetes.io/projected/684234f5-b409-42a4-9494-52a0565b000c-kube-api-access-wx7mg\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.606329 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.553426422 podStartE2EDuration="11.606306731s" podCreationTimestamp="2026-01-22 15:47:16 +0000 UTC" 
firstStartedPulling="2026-01-22 15:47:18.85624071 +0000 UTC m=+1385.617767620" lastFinishedPulling="2026-01-22 15:47:25.909121019 +0000 UTC m=+1392.670647929" observedRunningTime="2026-01-22 15:47:27.572991193 +0000 UTC m=+1394.334518103" watchObservedRunningTime="2026-01-22 15:47:27.606306731 +0000 UTC m=+1394.367833631" Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.903874 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 15:47:27 crc kubenswrapper[4825]: I0122 15:47:27.984435 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.013415 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 15:47:28 crc kubenswrapper[4825]: E0122 15:47:28.014041 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="684234f5-b409-42a4-9494-52a0565b000c" containerName="kube-state-metrics" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.014059 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="684234f5-b409-42a4-9494-52a0565b000c" containerName="kube-state-metrics" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.014302 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="684234f5-b409-42a4-9494-52a0565b000c" containerName="kube-state-metrics" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.015316 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.028579 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.035951 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.036190 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.178036 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64e8f75-44a3-495b-bd22-94db8fd34687-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a64e8f75-44a3-495b-bd22-94db8fd34687\") " pod="openstack/kube-state-metrics-0" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.178475 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a64e8f75-44a3-495b-bd22-94db8fd34687-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a64e8f75-44a3-495b-bd22-94db8fd34687\") " pod="openstack/kube-state-metrics-0" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.178509 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phfzm\" (UniqueName: \"kubernetes.io/projected/a64e8f75-44a3-495b-bd22-94db8fd34687-kube-api-access-phfzm\") pod \"kube-state-metrics-0\" (UID: \"a64e8f75-44a3-495b-bd22-94db8fd34687\") " pod="openstack/kube-state-metrics-0" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.178529 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a64e8f75-44a3-495b-bd22-94db8fd34687-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a64e8f75-44a3-495b-bd22-94db8fd34687\") " pod="openstack/kube-state-metrics-0" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.280040 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64e8f75-44a3-495b-bd22-94db8fd34687-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a64e8f75-44a3-495b-bd22-94db8fd34687\") " pod="openstack/kube-state-metrics-0" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.280191 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a64e8f75-44a3-495b-bd22-94db8fd34687-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a64e8f75-44a3-495b-bd22-94db8fd34687\") " pod="openstack/kube-state-metrics-0" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.280228 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phfzm\" (UniqueName: \"kubernetes.io/projected/a64e8f75-44a3-495b-bd22-94db8fd34687-kube-api-access-phfzm\") pod \"kube-state-metrics-0\" (UID: \"a64e8f75-44a3-495b-bd22-94db8fd34687\") " pod="openstack/kube-state-metrics-0" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.280247 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a64e8f75-44a3-495b-bd22-94db8fd34687-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a64e8f75-44a3-495b-bd22-94db8fd34687\") " pod="openstack/kube-state-metrics-0" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.293772 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/a64e8f75-44a3-495b-bd22-94db8fd34687-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a64e8f75-44a3-495b-bd22-94db8fd34687\") " pod="openstack/kube-state-metrics-0" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.298057 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a64e8f75-44a3-495b-bd22-94db8fd34687-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a64e8f75-44a3-495b-bd22-94db8fd34687\") " pod="openstack/kube-state-metrics-0" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.311784 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phfzm\" (UniqueName: \"kubernetes.io/projected/a64e8f75-44a3-495b-bd22-94db8fd34687-kube-api-access-phfzm\") pod \"kube-state-metrics-0\" (UID: \"a64e8f75-44a3-495b-bd22-94db8fd34687\") " pod="openstack/kube-state-metrics-0" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.316161 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64e8f75-44a3-495b-bd22-94db8fd34687-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a64e8f75-44a3-495b-bd22-94db8fd34687\") " pod="openstack/kube-state-metrics-0" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.642879 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.835699 4825 generic.go:334] "Generic (PLEG): container finished" podID="91f09962-57c2-42b0-9077-05b26c5899b3" containerID="533415d684514fcc5cb6e9b617c61baf7a74ce45888b4e89ca677e52b85e84cc" exitCode=0 Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.835817 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-hkzt2" event={"ID":"91f09962-57c2-42b0-9077-05b26c5899b3","Type":"ContainerDied","Data":"533415d684514fcc5cb6e9b617c61baf7a74ce45888b4e89ca677e52b85e84cc"} Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.835990 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-hkzt2" event={"ID":"91f09962-57c2-42b0-9077-05b26c5899b3","Type":"ContainerDied","Data":"fb756ac4a08a79f0945fd2539ededf217dc3cde36c3d6a58518acc1ead70857e"} Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.836006 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb756ac4a08a79f0945fd2539ededf217dc3cde36c3d6a58518acc1ead70857e" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.863776 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-hkzt2" Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.864891 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5bf1ff63-7ee6-4262-95b3-397c96d3649f","Type":"ContainerStarted","Data":"52cebc9aebcc9e1caa1b07483a6afc3508bdc10163d61807b01422ff6a68fea4"} Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.865045 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5bf1ff63-7ee6-4262-95b3-397c96d3649f" containerName="nova-metadata-log" containerID="cri-o://6c6920faf30aa97ffaed50449e54ed26f14a17393c0e5d048a3ec9443032eb26" gracePeriod=30 Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.865287 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5bf1ff63-7ee6-4262-95b3-397c96d3649f" containerName="nova-metadata-metadata" containerID="cri-o://52cebc9aebcc9e1caa1b07483a6afc3508bdc10163d61807b01422ff6a68fea4" gracePeriod=30 Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.962277 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-dns-svc\") pod \"91f09962-57c2-42b0-9077-05b26c5899b3\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.962323 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfpn5\" (UniqueName: \"kubernetes.io/projected/91f09962-57c2-42b0-9077-05b26c5899b3-kube-api-access-kfpn5\") pod \"91f09962-57c2-42b0-9077-05b26c5899b3\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.962358 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-config\") pod \"91f09962-57c2-42b0-9077-05b26c5899b3\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.962389 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-dns-swift-storage-0\") pod \"91f09962-57c2-42b0-9077-05b26c5899b3\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.962420 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-ovsdbserver-sb\") pod \"91f09962-57c2-42b0-9077-05b26c5899b3\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.962476 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-ovsdbserver-nb\") pod \"91f09962-57c2-42b0-9077-05b26c5899b3\" (UID: \"91f09962-57c2-42b0-9077-05b26c5899b3\") " Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.963233 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cb504e0-4450-4771-943e-1f4ebe7c074e","Type":"ContainerStarted","Data":"64c6dc84ab0a5b41785a7e0cb6a913c8ae5b9d890b5ace34a31b513f807cbea6"} Jan 22 15:47:28 crc kubenswrapper[4825]: I0122 15:47:28.968470 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=5.889772579 podStartE2EDuration="12.968441911s" podCreationTimestamp="2026-01-22 15:47:16 +0000 UTC" firstStartedPulling="2026-01-22 15:47:18.842569021 +0000 UTC m=+1385.604095931" lastFinishedPulling="2026-01-22 15:47:25.921238353 +0000 UTC m=+1392.682765263" 
observedRunningTime="2026-01-22 15:47:28.964350495 +0000 UTC m=+1395.725877415" watchObservedRunningTime="2026-01-22 15:47:28.968441911 +0000 UTC m=+1395.729968831" Jan 22 15:47:29 crc kubenswrapper[4825]: I0122 15:47:29.117058 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91f09962-57c2-42b0-9077-05b26c5899b3-kube-api-access-kfpn5" (OuterVolumeSpecName: "kube-api-access-kfpn5") pod "91f09962-57c2-42b0-9077-05b26c5899b3" (UID: "91f09962-57c2-42b0-9077-05b26c5899b3"). InnerVolumeSpecName "kube-api-access-kfpn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:47:29 crc kubenswrapper[4825]: I0122 15:47:29.130666 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.975584241 podStartE2EDuration="13.130644437s" podCreationTimestamp="2026-01-22 15:47:16 +0000 UTC" firstStartedPulling="2026-01-22 15:47:18.753517157 +0000 UTC m=+1385.515044067" lastFinishedPulling="2026-01-22 15:47:25.908577353 +0000 UTC m=+1392.670104263" observedRunningTime="2026-01-22 15:47:29.129830354 +0000 UTC m=+1395.891357264" watchObservedRunningTime="2026-01-22 15:47:29.130644437 +0000 UTC m=+1395.892171347" Jan 22 15:47:29 crc kubenswrapper[4825]: I0122 15:47:29.184403 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "91f09962-57c2-42b0-9077-05b26c5899b3" (UID: "91f09962-57c2-42b0-9077-05b26c5899b3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:47:29 crc kubenswrapper[4825]: I0122 15:47:29.203863 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:29 crc kubenswrapper[4825]: I0122 15:47:29.203916 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfpn5\" (UniqueName: \"kubernetes.io/projected/91f09962-57c2-42b0-9077-05b26c5899b3-kube-api-access-kfpn5\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:29 crc kubenswrapper[4825]: I0122 15:47:29.266557 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-config" (OuterVolumeSpecName: "config") pod "91f09962-57c2-42b0-9077-05b26c5899b3" (UID: "91f09962-57c2-42b0-9077-05b26c5899b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:47:29 crc kubenswrapper[4825]: I0122 15:47:29.303334 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "91f09962-57c2-42b0-9077-05b26c5899b3" (UID: "91f09962-57c2-42b0-9077-05b26c5899b3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:47:29 crc kubenswrapper[4825]: I0122 15:47:29.317516 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:29 crc kubenswrapper[4825]: I0122 15:47:29.317553 4825 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:29 crc kubenswrapper[4825]: I0122 15:47:29.414684 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "91f09962-57c2-42b0-9077-05b26c5899b3" (UID: "91f09962-57c2-42b0-9077-05b26c5899b3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:47:29 crc kubenswrapper[4825]: I0122 15:47:29.432576 4825 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:29 crc kubenswrapper[4825]: I0122 15:47:29.435545 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "91f09962-57c2-42b0-9077-05b26c5899b3" (UID: "91f09962-57c2-42b0-9077-05b26c5899b3"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:47:29 crc kubenswrapper[4825]: I0122 15:47:29.529911 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="684234f5-b409-42a4-9494-52a0565b000c" path="/var/lib/kubelet/pods/684234f5-b409-42a4-9494-52a0565b000c/volumes" Jan 22 15:47:29 crc kubenswrapper[4825]: I0122 15:47:29.904470 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91f09962-57c2-42b0-9077-05b26c5899b3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:29 crc kubenswrapper[4825]: I0122 15:47:29.985829 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 15:47:30 crc kubenswrapper[4825]: W0122 15:47:30.017897 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda64e8f75_44a3_495b_bd22_94db8fd34687.slice/crio-68d020d6b2a4e446339eb6377a5641f7c10f6c50decedb023ccae5d3f3b56323 WatchSource:0}: Error finding container 68d020d6b2a4e446339eb6377a5641f7c10f6c50decedb023ccae5d3f3b56323: Status 404 returned error can't find the container with id 68d020d6b2a4e446339eb6377a5641f7c10f6c50decedb023ccae5d3f3b56323 Jan 22 15:47:30 crc kubenswrapper[4825]: I0122 15:47:30.045048 4825 generic.go:334] "Generic (PLEG): container finished" podID="5bf1ff63-7ee6-4262-95b3-397c96d3649f" containerID="6c6920faf30aa97ffaed50449e54ed26f14a17393c0e5d048a3ec9443032eb26" exitCode=143 Jan 22 15:47:30 crc kubenswrapper[4825]: I0122 15:47:30.045146 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5bf1ff63-7ee6-4262-95b3-397c96d3649f","Type":"ContainerDied","Data":"6c6920faf30aa97ffaed50449e54ed26f14a17393c0e5d048a3ec9443032eb26"} Jan 22 15:47:30 crc kubenswrapper[4825]: I0122 15:47:30.061991 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-hkzt2" Jan 22 15:47:30 crc kubenswrapper[4825]: I0122 15:47:30.062884 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrqt8" event={"ID":"440a9cba-a225-40d8-b171-c15c47d7d223","Type":"ContainerStarted","Data":"9dbfb12199e63ee4f3b1a305e51036c89e31d6ddcdbe76b4dcd77939a4382aba"} Jan 22 15:47:30 crc kubenswrapper[4825]: I0122 15:47:30.103697 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qrqt8" podStartSLOduration=4.614699413 podStartE2EDuration="15.103675145s" podCreationTimestamp="2026-01-22 15:47:15 +0000 UTC" firstStartedPulling="2026-01-22 15:47:17.623787901 +0000 UTC m=+1384.385314811" lastFinishedPulling="2026-01-22 15:47:28.112763633 +0000 UTC m=+1394.874290543" observedRunningTime="2026-01-22 15:47:30.089428639 +0000 UTC m=+1396.850955549" watchObservedRunningTime="2026-01-22 15:47:30.103675145 +0000 UTC m=+1396.865202045" Jan 22 15:47:30 crc kubenswrapper[4825]: I0122 15:47:30.130080 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-hkzt2"] Jan 22 15:47:30 crc kubenswrapper[4825]: I0122 15:47:30.146917 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-hkzt2"] Jan 22 15:47:31 crc kubenswrapper[4825]: I0122 15:47:31.073793 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a64e8f75-44a3-495b-bd22-94db8fd34687","Type":"ContainerStarted","Data":"68d020d6b2a4e446339eb6377a5641f7c10f6c50decedb023ccae5d3f3b56323"} Jan 22 15:47:31 crc kubenswrapper[4825]: I0122 15:47:31.531116 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91f09962-57c2-42b0-9077-05b26c5899b3" path="/var/lib/kubelet/pods/91f09962-57c2-42b0-9077-05b26c5899b3/volumes" Jan 22 15:47:31 crc kubenswrapper[4825]: I0122 15:47:31.566602 4825 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:47:31 crc kubenswrapper[4825]: I0122 15:47:31.567008 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerName="ceilometer-central-agent" containerID="cri-o://b7c613ffcb08040ca7fdc3824c14e88d4c881e2306df9c4655f700bf84f743e9" gracePeriod=30 Jan 22 15:47:31 crc kubenswrapper[4825]: I0122 15:47:31.567083 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerName="proxy-httpd" containerID="cri-o://7f836f8334c2fd2f8a4b1ee6a7c15cbec9b7cff19d39aa5d9ce19ee2245f7409" gracePeriod=30 Jan 22 15:47:31 crc kubenswrapper[4825]: I0122 15:47:31.567087 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerName="ceilometer-notification-agent" containerID="cri-o://35f3d629e833aba6a9ac4eb6a2a96a7d0ad424fa84ace589ba3a80ee50b16c35" gracePeriod=30 Jan 22 15:47:31 crc kubenswrapper[4825]: I0122 15:47:31.567083 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerName="sg-core" containerID="cri-o://64f794a848af2ca67e3e90e4c7a15972a0e39a453392c06bbb6682f121a0bbc1" gracePeriod=30 Jan 22 15:47:31 crc kubenswrapper[4825]: I0122 15:47:31.959345 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 15:47:32 crc kubenswrapper[4825]: I0122 15:47:32.094436 4825 generic.go:334] "Generic (PLEG): container finished" podID="5bf1ff63-7ee6-4262-95b3-397c96d3649f" containerID="52cebc9aebcc9e1caa1b07483a6afc3508bdc10163d61807b01422ff6a68fea4" exitCode=0 Jan 22 15:47:32 crc kubenswrapper[4825]: I0122 15:47:32.094542 4825 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-metadata-0" event={"ID":"5bf1ff63-7ee6-4262-95b3-397c96d3649f","Type":"ContainerDied","Data":"52cebc9aebcc9e1caa1b07483a6afc3508bdc10163d61807b01422ff6a68fea4"} Jan 22 15:47:32 crc kubenswrapper[4825]: I0122 15:47:32.105033 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3973fcbe-86c9-4b0e-9f53-de4af29601dc","Type":"ContainerDied","Data":"7f836f8334c2fd2f8a4b1ee6a7c15cbec9b7cff19d39aa5d9ce19ee2245f7409"} Jan 22 15:47:32 crc kubenswrapper[4825]: I0122 15:47:32.104991 4825 generic.go:334] "Generic (PLEG): container finished" podID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerID="7f836f8334c2fd2f8a4b1ee6a7c15cbec9b7cff19d39aa5d9ce19ee2245f7409" exitCode=0 Jan 22 15:47:32 crc kubenswrapper[4825]: I0122 15:47:32.105719 4825 generic.go:334] "Generic (PLEG): container finished" podID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerID="64f794a848af2ca67e3e90e4c7a15972a0e39a453392c06bbb6682f121a0bbc1" exitCode=2 Jan 22 15:47:32 crc kubenswrapper[4825]: I0122 15:47:32.105746 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3973fcbe-86c9-4b0e-9f53-de4af29601dc","Type":"ContainerDied","Data":"64f794a848af2ca67e3e90e4c7a15972a0e39a453392c06bbb6682f121a0bbc1"} Jan 22 15:47:32 crc kubenswrapper[4825]: I0122 15:47:32.416901 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 15:47:32 crc kubenswrapper[4825]: I0122 15:47:32.416959 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 15:47:32 crc kubenswrapper[4825]: I0122 15:47:32.426262 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:47:33 crc kubenswrapper[4825]: I0122 15:47:33.107606 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58bd69657f-hkzt2" 
podUID="91f09962-57c2-42b0-9077-05b26c5899b3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.198:5353: i/o timeout" Jan 22 15:47:33 crc kubenswrapper[4825]: I0122 15:47:33.142284 4825 generic.go:334] "Generic (PLEG): container finished" podID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerID="b7c613ffcb08040ca7fdc3824c14e88d4c881e2306df9c4655f700bf84f743e9" exitCode=0 Jan 22 15:47:33 crc kubenswrapper[4825]: I0122 15:47:33.142327 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3973fcbe-86c9-4b0e-9f53-de4af29601dc","Type":"ContainerDied","Data":"b7c613ffcb08040ca7fdc3824c14e88d4c881e2306df9c4655f700bf84f743e9"} Jan 22 15:47:33 crc kubenswrapper[4825]: I0122 15:47:33.302018 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 15:47:33 crc kubenswrapper[4825]: I0122 15:47:33.476073 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bf1ff63-7ee6-4262-95b3-397c96d3649f-config-data\") pod \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\" (UID: \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\") " Jan 22 15:47:33 crc kubenswrapper[4825]: I0122 15:47:33.476196 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bf1ff63-7ee6-4262-95b3-397c96d3649f-logs\") pod \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\" (UID: \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\") " Jan 22 15:47:33 crc kubenswrapper[4825]: I0122 15:47:33.476248 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf1ff63-7ee6-4262-95b3-397c96d3649f-combined-ca-bundle\") pod \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\" (UID: \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\") " Jan 22 15:47:33 crc kubenswrapper[4825]: I0122 15:47:33.476369 4825 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8qjz\" (UniqueName: \"kubernetes.io/projected/5bf1ff63-7ee6-4262-95b3-397c96d3649f-kube-api-access-x8qjz\") pod \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\" (UID: \"5bf1ff63-7ee6-4262-95b3-397c96d3649f\") " Jan 22 15:47:33 crc kubenswrapper[4825]: I0122 15:47:33.476559 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bf1ff63-7ee6-4262-95b3-397c96d3649f-logs" (OuterVolumeSpecName: "logs") pod "5bf1ff63-7ee6-4262-95b3-397c96d3649f" (UID: "5bf1ff63-7ee6-4262-95b3-397c96d3649f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:47:33 crc kubenswrapper[4825]: I0122 15:47:33.477585 4825 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bf1ff63-7ee6-4262-95b3-397c96d3649f-logs\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:33 crc kubenswrapper[4825]: I0122 15:47:33.482101 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bf1ff63-7ee6-4262-95b3-397c96d3649f-kube-api-access-x8qjz" (OuterVolumeSpecName: "kube-api-access-x8qjz") pod "5bf1ff63-7ee6-4262-95b3-397c96d3649f" (UID: "5bf1ff63-7ee6-4262-95b3-397c96d3649f"). InnerVolumeSpecName "kube-api-access-x8qjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:47:33 crc kubenswrapper[4825]: I0122 15:47:33.511813 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf1ff63-7ee6-4262-95b3-397c96d3649f-config-data" (OuterVolumeSpecName: "config-data") pod "5bf1ff63-7ee6-4262-95b3-397c96d3649f" (UID: "5bf1ff63-7ee6-4262-95b3-397c96d3649f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:47:33 crc kubenswrapper[4825]: I0122 15:47:33.518070 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf1ff63-7ee6-4262-95b3-397c96d3649f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bf1ff63-7ee6-4262-95b3-397c96d3649f" (UID: "5bf1ff63-7ee6-4262-95b3-397c96d3649f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:47:33 crc kubenswrapper[4825]: I0122 15:47:33.579325 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8qjz\" (UniqueName: \"kubernetes.io/projected/5bf1ff63-7ee6-4262-95b3-397c96d3649f-kube-api-access-x8qjz\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:33 crc kubenswrapper[4825]: I0122 15:47:33.579358 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bf1ff63-7ee6-4262-95b3-397c96d3649f-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:33 crc kubenswrapper[4825]: I0122 15:47:33.579451 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf1ff63-7ee6-4262-95b3-397c96d3649f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.156319 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a64e8f75-44a3-495b-bd22-94db8fd34687","Type":"ContainerStarted","Data":"186137f0e99e3d0c7bec25ee4623985d77828bc908272bcbd851032c587061d5"} Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.156693 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.159645 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"5bf1ff63-7ee6-4262-95b3-397c96d3649f","Type":"ContainerDied","Data":"591fd22ca8157148cf98926a10a7973a954345b65790169f3a2204bdea92f1fa"} Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.159704 4825 scope.go:117] "RemoveContainer" containerID="52cebc9aebcc9e1caa1b07483a6afc3508bdc10163d61807b01422ff6a68fea4" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.159871 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.194567 4825 scope.go:117] "RemoveContainer" containerID="6c6920faf30aa97ffaed50449e54ed26f14a17393c0e5d048a3ec9443032eb26" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.212901 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.393274844 podStartE2EDuration="7.212881471s" podCreationTimestamp="2026-01-22 15:47:27 +0000 UTC" firstStartedPulling="2026-01-22 15:47:30.025634914 +0000 UTC m=+1396.787161824" lastFinishedPulling="2026-01-22 15:47:33.845241511 +0000 UTC m=+1400.606768451" observedRunningTime="2026-01-22 15:47:34.19036258 +0000 UTC m=+1400.951889510" watchObservedRunningTime="2026-01-22 15:47:34.212881471 +0000 UTC m=+1400.974408381" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.223940 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.238181 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.253004 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 15:47:34 crc kubenswrapper[4825]: E0122 15:47:34.253823 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f09962-57c2-42b0-9077-05b26c5899b3" containerName="dnsmasq-dns" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 
15:47:34.253942 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f09962-57c2-42b0-9077-05b26c5899b3" containerName="dnsmasq-dns" Jan 22 15:47:34 crc kubenswrapper[4825]: E0122 15:47:34.254042 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf1ff63-7ee6-4262-95b3-397c96d3649f" containerName="nova-metadata-metadata" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.254125 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf1ff63-7ee6-4262-95b3-397c96d3649f" containerName="nova-metadata-metadata" Jan 22 15:47:34 crc kubenswrapper[4825]: E0122 15:47:34.254229 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f09962-57c2-42b0-9077-05b26c5899b3" containerName="init" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.254297 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f09962-57c2-42b0-9077-05b26c5899b3" containerName="init" Jan 22 15:47:34 crc kubenswrapper[4825]: E0122 15:47:34.254376 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf1ff63-7ee6-4262-95b3-397c96d3649f" containerName="nova-metadata-log" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.254462 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf1ff63-7ee6-4262-95b3-397c96d3649f" containerName="nova-metadata-log" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.254819 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="91f09962-57c2-42b0-9077-05b26c5899b3" containerName="dnsmasq-dns" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.254924 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bf1ff63-7ee6-4262-95b3-397c96d3649f" containerName="nova-metadata-log" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.255394 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bf1ff63-7ee6-4262-95b3-397c96d3649f" containerName="nova-metadata-metadata" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 
15:47:34.256791 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.260485 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.262153 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.274749 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.335935 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") " pod="openstack/nova-metadata-0" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.336070 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cblkh\" (UniqueName: \"kubernetes.io/projected/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-kube-api-access-cblkh\") pod \"nova-metadata-0\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") " pod="openstack/nova-metadata-0" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.336359 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-config-data\") pod \"nova-metadata-0\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") " pod="openstack/nova-metadata-0" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.336597 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") " pod="openstack/nova-metadata-0" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.336674 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-logs\") pod \"nova-metadata-0\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") " pod="openstack/nova-metadata-0" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.437729 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-logs\") pod \"nova-metadata-0\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") " pod="openstack/nova-metadata-0" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.437820 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") " pod="openstack/nova-metadata-0" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.437861 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cblkh\" (UniqueName: \"kubernetes.io/projected/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-kube-api-access-cblkh\") pod \"nova-metadata-0\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") " pod="openstack/nova-metadata-0" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.437892 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-config-data\") pod \"nova-metadata-0\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") " 
pod="openstack/nova-metadata-0" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.437989 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") " pod="openstack/nova-metadata-0" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.439806 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-logs\") pod \"nova-metadata-0\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") " pod="openstack/nova-metadata-0" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.442199 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") " pod="openstack/nova-metadata-0" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.442686 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-config-data\") pod \"nova-metadata-0\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") " pod="openstack/nova-metadata-0" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.447610 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") " pod="openstack/nova-metadata-0" Jan 22 15:47:34 crc kubenswrapper[4825]: I0122 15:47:34.455308 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cblkh\" (UniqueName: 
\"kubernetes.io/projected/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-kube-api-access-cblkh\") pod \"nova-metadata-0\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") " pod="openstack/nova-metadata-0" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:34.565176 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:34.948837 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.353007 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-scripts\") pod \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.353081 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3973fcbe-86c9-4b0e-9f53-de4af29601dc-log-httpd\") pod \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.353150 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3973fcbe-86c9-4b0e-9f53-de4af29601dc-run-httpd\") pod \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.353275 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l4qh\" (UniqueName: \"kubernetes.io/projected/3973fcbe-86c9-4b0e-9f53-de4af29601dc-kube-api-access-7l4qh\") pod \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.353339 4825 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-config-data\") pod \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.353452 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-sg-core-conf-yaml\") pod \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.353533 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-combined-ca-bundle\") pod \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\" (UID: \"3973fcbe-86c9-4b0e-9f53-de4af29601dc\") " Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.357570 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3973fcbe-86c9-4b0e-9f53-de4af29601dc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3973fcbe-86c9-4b0e-9f53-de4af29601dc" (UID: "3973fcbe-86c9-4b0e-9f53-de4af29601dc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.359638 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3973fcbe-86c9-4b0e-9f53-de4af29601dc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3973fcbe-86c9-4b0e-9f53-de4af29601dc" (UID: "3973fcbe-86c9-4b0e-9f53-de4af29601dc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.380636 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-scripts" (OuterVolumeSpecName: "scripts") pod "3973fcbe-86c9-4b0e-9f53-de4af29601dc" (UID: "3973fcbe-86c9-4b0e-9f53-de4af29601dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.398815 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3973fcbe-86c9-4b0e-9f53-de4af29601dc-kube-api-access-7l4qh" (OuterVolumeSpecName: "kube-api-access-7l4qh") pod "3973fcbe-86c9-4b0e-9f53-de4af29601dc" (UID: "3973fcbe-86c9-4b0e-9f53-de4af29601dc"). InnerVolumeSpecName "kube-api-access-7l4qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.414496 4825 generic.go:334] "Generic (PLEG): container finished" podID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerID="35f3d629e833aba6a9ac4eb6a2a96a7d0ad424fa84ace589ba3a80ee50b16c35" exitCode=0 Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.414623 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3973fcbe-86c9-4b0e-9f53-de4af29601dc","Type":"ContainerDied","Data":"35f3d629e833aba6a9ac4eb6a2a96a7d0ad424fa84ace589ba3a80ee50b16c35"} Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.414684 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3973fcbe-86c9-4b0e-9f53-de4af29601dc","Type":"ContainerDied","Data":"51ca0955a669b7d1833654794285f8dd65e56c1897fa520e57ead6747b70bb75"} Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.414698 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.414723 4825 scope.go:117] "RemoveContainer" containerID="7f836f8334c2fd2f8a4b1ee6a7c15cbec9b7cff19d39aa5d9ce19ee2245f7409" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.444722 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3973fcbe-86c9-4b0e-9f53-de4af29601dc" (UID: "3973fcbe-86c9-4b0e-9f53-de4af29601dc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.459528 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.459618 4825 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3973fcbe-86c9-4b0e-9f53-de4af29601dc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.459630 4825 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3973fcbe-86c9-4b0e-9f53-de4af29601dc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.459639 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l4qh\" (UniqueName: \"kubernetes.io/projected/3973fcbe-86c9-4b0e-9f53-de4af29601dc-kube-api-access-7l4qh\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.459652 4825 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" 
Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.526553 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3973fcbe-86c9-4b0e-9f53-de4af29601dc" (UID: "3973fcbe-86c9-4b0e-9f53-de4af29601dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.541165 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.541206 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.542442 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bf1ff63-7ee6-4262-95b3-397c96d3649f" path="/var/lib/kubelet/pods/5bf1ff63-7ee6-4262-95b3-397c96d3649f/volumes" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.544146 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-config-data" (OuterVolumeSpecName: "config-data") pod "3973fcbe-86c9-4b0e-9f53-de4af29601dc" (UID: "3973fcbe-86c9-4b0e-9f53-de4af29601dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.561644 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.561677 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3973fcbe-86c9-4b0e-9f53-de4af29601dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.795342 4825 scope.go:117] "RemoveContainer" containerID="64f794a848af2ca67e3e90e4c7a15972a0e39a453392c06bbb6682f121a0bbc1" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.803215 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.821215 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.858333 4825 scope.go:117] "RemoveContainer" containerID="35f3d629e833aba6a9ac4eb6a2a96a7d0ad424fa84ace589ba3a80ee50b16c35" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.859547 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:47:35 crc kubenswrapper[4825]: E0122 15:47:35.860021 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerName="sg-core" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.860033 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerName="sg-core" Jan 22 15:47:35 crc kubenswrapper[4825]: E0122 15:47:35.860062 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerName="proxy-httpd" Jan 22 15:47:35 crc 
kubenswrapper[4825]: I0122 15:47:35.860068 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerName="proxy-httpd" Jan 22 15:47:35 crc kubenswrapper[4825]: E0122 15:47:35.860085 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerName="ceilometer-central-agent" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.860091 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerName="ceilometer-central-agent" Jan 22 15:47:35 crc kubenswrapper[4825]: E0122 15:47:35.860103 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerName="ceilometer-notification-agent" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.860109 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerName="ceilometer-notification-agent" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.860335 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerName="ceilometer-notification-agent" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.860345 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerName="sg-core" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.860364 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerName="proxy-httpd" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.860375 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" containerName="ceilometer-central-agent" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.862496 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.868208 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.869720 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.871159 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.889522 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.909017 4825 scope.go:117] "RemoveContainer" containerID="b7c613ffcb08040ca7fdc3824c14e88d4c881e2306df9c4655f700bf84f743e9" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.911308 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qrqt8" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.911378 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qrqt8" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.970155 4825 scope.go:117] "RemoveContainer" containerID="7f836f8334c2fd2f8a4b1ee6a7c15cbec9b7cff19d39aa5d9ce19ee2245f7409" Jan 22 15:47:35 crc kubenswrapper[4825]: E0122 15:47:35.970973 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f836f8334c2fd2f8a4b1ee6a7c15cbec9b7cff19d39aa5d9ce19ee2245f7409\": container with ID starting with 7f836f8334c2fd2f8a4b1ee6a7c15cbec9b7cff19d39aa5d9ce19ee2245f7409 not found: ID does not exist" containerID="7f836f8334c2fd2f8a4b1ee6a7c15cbec9b7cff19d39aa5d9ce19ee2245f7409" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.971032 4825 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f836f8334c2fd2f8a4b1ee6a7c15cbec9b7cff19d39aa5d9ce19ee2245f7409"} err="failed to get container status \"7f836f8334c2fd2f8a4b1ee6a7c15cbec9b7cff19d39aa5d9ce19ee2245f7409\": rpc error: code = NotFound desc = could not find container \"7f836f8334c2fd2f8a4b1ee6a7c15cbec9b7cff19d39aa5d9ce19ee2245f7409\": container with ID starting with 7f836f8334c2fd2f8a4b1ee6a7c15cbec9b7cff19d39aa5d9ce19ee2245f7409 not found: ID does not exist" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.971059 4825 scope.go:117] "RemoveContainer" containerID="64f794a848af2ca67e3e90e4c7a15972a0e39a453392c06bbb6682f121a0bbc1" Jan 22 15:47:35 crc kubenswrapper[4825]: E0122 15:47:35.971518 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64f794a848af2ca67e3e90e4c7a15972a0e39a453392c06bbb6682f121a0bbc1\": container with ID starting with 64f794a848af2ca67e3e90e4c7a15972a0e39a453392c06bbb6682f121a0bbc1 not found: ID does not exist" containerID="64f794a848af2ca67e3e90e4c7a15972a0e39a453392c06bbb6682f121a0bbc1" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.971537 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64f794a848af2ca67e3e90e4c7a15972a0e39a453392c06bbb6682f121a0bbc1"} err="failed to get container status \"64f794a848af2ca67e3e90e4c7a15972a0e39a453392c06bbb6682f121a0bbc1\": rpc error: code = NotFound desc = could not find container \"64f794a848af2ca67e3e90e4c7a15972a0e39a453392c06bbb6682f121a0bbc1\": container with ID starting with 64f794a848af2ca67e3e90e4c7a15972a0e39a453392c06bbb6682f121a0bbc1 not found: ID does not exist" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.971551 4825 scope.go:117] "RemoveContainer" containerID="35f3d629e833aba6a9ac4eb6a2a96a7d0ad424fa84ace589ba3a80ee50b16c35" Jan 22 15:47:35 crc kubenswrapper[4825]: E0122 15:47:35.971939 4825 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35f3d629e833aba6a9ac4eb6a2a96a7d0ad424fa84ace589ba3a80ee50b16c35\": container with ID starting with 35f3d629e833aba6a9ac4eb6a2a96a7d0ad424fa84ace589ba3a80ee50b16c35 not found: ID does not exist" containerID="35f3d629e833aba6a9ac4eb6a2a96a7d0ad424fa84ace589ba3a80ee50b16c35" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.971961 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f3d629e833aba6a9ac4eb6a2a96a7d0ad424fa84ace589ba3a80ee50b16c35"} err="failed to get container status \"35f3d629e833aba6a9ac4eb6a2a96a7d0ad424fa84ace589ba3a80ee50b16c35\": rpc error: code = NotFound desc = could not find container \"35f3d629e833aba6a9ac4eb6a2a96a7d0ad424fa84ace589ba3a80ee50b16c35\": container with ID starting with 35f3d629e833aba6a9ac4eb6a2a96a7d0ad424fa84ace589ba3a80ee50b16c35 not found: ID does not exist" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.971996 4825 scope.go:117] "RemoveContainer" containerID="b7c613ffcb08040ca7fdc3824c14e88d4c881e2306df9c4655f700bf84f743e9" Jan 22 15:47:35 crc kubenswrapper[4825]: E0122 15:47:35.972234 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7c613ffcb08040ca7fdc3824c14e88d4c881e2306df9c4655f700bf84f743e9\": container with ID starting with b7c613ffcb08040ca7fdc3824c14e88d4c881e2306df9c4655f700bf84f743e9 not found: ID does not exist" containerID="b7c613ffcb08040ca7fdc3824c14e88d4c881e2306df9c4655f700bf84f743e9" Jan 22 15:47:35 crc kubenswrapper[4825]: I0122 15:47:35.972254 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7c613ffcb08040ca7fdc3824c14e88d4c881e2306df9c4655f700bf84f743e9"} err="failed to get container status \"b7c613ffcb08040ca7fdc3824c14e88d4c881e2306df9c4655f700bf84f743e9\": rpc error: code = NotFound desc = could 
not find container \"b7c613ffcb08040ca7fdc3824c14e88d4c881e2306df9c4655f700bf84f743e9\": container with ID starting with b7c613ffcb08040ca7fdc3824c14e88d4c881e2306df9c4655f700bf84f743e9 not found: ID does not exist" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.064699 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-scripts\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.064765 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.064797 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-config-data\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.064836 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a3f8382-fadc-4144-83c0-b378ebcbdef1-run-httpd\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.064857 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.064878 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a3f8382-fadc-4144-83c0-b378ebcbdef1-log-httpd\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.064949 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.065012 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp4qd\" (UniqueName: \"kubernetes.io/projected/6a3f8382-fadc-4144-83c0-b378ebcbdef1-kube-api-access-sp4qd\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.167738 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.167826 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp4qd\" (UniqueName: \"kubernetes.io/projected/6a3f8382-fadc-4144-83c0-b378ebcbdef1-kube-api-access-sp4qd\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 
15:47:36.167923 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-scripts\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.167952 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.167972 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-config-data\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.168019 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a3f8382-fadc-4144-83c0-b378ebcbdef1-run-httpd\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.168049 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.168072 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a3f8382-fadc-4144-83c0-b378ebcbdef1-log-httpd\") pod \"ceilometer-0\" (UID: 
\"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.169021 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a3f8382-fadc-4144-83c0-b378ebcbdef1-run-httpd\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.170413 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a3f8382-fadc-4144-83c0-b378ebcbdef1-log-httpd\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.173703 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.173900 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-scripts\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.174252 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-config-data\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.174536 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.177604 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.188942 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp4qd\" (UniqueName: \"kubernetes.io/projected/6a3f8382-fadc-4144-83c0-b378ebcbdef1-kube-api-access-sp4qd\") pod \"ceilometer-0\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: W0122 15:47:36.363173 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e3cf9b5_85d3_4137_8be1_8b4a865f974c.slice/crio-5c9ad956d8d506550d7fcf16624400622b0e2990b292e0a9f99ee7d8bbbef007 WatchSource:0}: Error finding container 5c9ad956d8d506550d7fcf16624400622b0e2990b292e0a9f99ee7d8bbbef007: Status 404 returned error can't find the container with id 5c9ad956d8d506550d7fcf16624400622b0e2990b292e0a9f99ee7d8bbbef007 Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.365296 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.593824 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.608362 4825 generic.go:334] "Generic (PLEG): container finished" podID="1d5eac43-e644-430f-b4c7-8003b6984a30" containerID="76444e0def286411aaecb5e2b7b368677ced5dbdea1b6f1ab77a176368441ccb" exitCode=0 Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.608444 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-q7pvf" event={"ID":"1d5eac43-e644-430f-b4c7-8003b6984a30","Type":"ContainerDied","Data":"76444e0def286411aaecb5e2b7b368677ced5dbdea1b6f1ab77a176368441ccb"} Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.619501 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e3cf9b5-85d3-4137-8be1-8b4a865f974c","Type":"ContainerStarted","Data":"5c9ad956d8d506550d7fcf16624400622b0e2990b292e0a9f99ee7d8bbbef007"} Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.959658 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.969073 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qrqt8" podUID="440a9cba-a225-40d8-b171-c15c47d7d223" containerName="registry-server" probeResult="failure" output=< Jan 22 15:47:36 crc kubenswrapper[4825]: timeout: failed to connect service ":50051" within 1s Jan 22 15:47:36 crc kubenswrapper[4825]: > Jan 22 15:47:36 crc kubenswrapper[4825]: I0122 15:47:36.996451 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 22 15:47:37 crc kubenswrapper[4825]: I0122 15:47:37.334481 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 15:47:37 crc kubenswrapper[4825]: I0122 15:47:37.337325 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Jan 22 15:47:37 crc kubenswrapper[4825]: I0122 15:47:37.401388 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:47:37 crc kubenswrapper[4825]: I0122 15:47:37.532152 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3973fcbe-86c9-4b0e-9f53-de4af29601dc" path="/var/lib/kubelet/pods/3973fcbe-86c9-4b0e-9f53-de4af29601dc/volumes" Jan 22 15:47:37 crc kubenswrapper[4825]: I0122 15:47:37.993324 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a3f8382-fadc-4144-83c0-b378ebcbdef1","Type":"ContainerStarted","Data":"2d41b50a394fc28c379b023243761898021710803b380c4827ecb9439358eca6"} Jan 22 15:47:38 crc kubenswrapper[4825]: I0122 15:47:38.005358 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e3cf9b5-85d3-4137-8be1-8b4a865f974c","Type":"ContainerStarted","Data":"40a8f028d6b3504c50202ee6e49258598561dd7c9cbf938b1358cf3e6ccb83b2"} Jan 22 15:47:38 crc kubenswrapper[4825]: I0122 15:47:38.005409 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e3cf9b5-85d3-4137-8be1-8b4a865f974c","Type":"ContainerStarted","Data":"724458ccee11874f2c37a882fb5cc8dbc1492c99b7ae0135f68680012403094b"} Jan 22 15:47:38 crc kubenswrapper[4825]: I0122 15:47:38.034926 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.034900167 podStartE2EDuration="4.034900167s" podCreationTimestamp="2026-01-22 15:47:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:47:38.027031473 +0000 UTC m=+1404.788558383" watchObservedRunningTime="2026-01-22 15:47:38.034900167 +0000 UTC m=+1404.796427077" Jan 22 15:47:38 crc kubenswrapper[4825]: I0122 15:47:38.044524 4825 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 22 15:47:38 crc kubenswrapper[4825]: I0122 15:47:38.532946 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3cb504e0-4450-4771-943e-1f4ebe7c074e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.224:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 15:47:38 crc kubenswrapper[4825]: I0122 15:47:38.534405 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3cb504e0-4450-4771-943e-1f4ebe7c074e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.224:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 15:47:39 crc kubenswrapper[4825]: I0122 15:47:39.109936 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-q7pvf" event={"ID":"1d5eac43-e644-430f-b4c7-8003b6984a30","Type":"ContainerDied","Data":"c7263beae0bb6b086aa1746c503d94be1e94c9f226f01f926f7c54fc37b70381"} Jan 22 15:47:39 crc kubenswrapper[4825]: I0122 15:47:39.110297 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7263beae0bb6b086aa1746c503d94be1e94c9f226f01f926f7c54fc37b70381" Jan 22 15:47:39 crc kubenswrapper[4825]: I0122 15:47:39.160359 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-q7pvf" Jan 22 15:47:39 crc kubenswrapper[4825]: I0122 15:47:39.318515 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpjn2\" (UniqueName: \"kubernetes.io/projected/1d5eac43-e644-430f-b4c7-8003b6984a30-kube-api-access-mpjn2\") pod \"1d5eac43-e644-430f-b4c7-8003b6984a30\" (UID: \"1d5eac43-e644-430f-b4c7-8003b6984a30\") " Jan 22 15:47:39 crc kubenswrapper[4825]: I0122 15:47:39.319651 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5eac43-e644-430f-b4c7-8003b6984a30-combined-ca-bundle\") pod \"1d5eac43-e644-430f-b4c7-8003b6984a30\" (UID: \"1d5eac43-e644-430f-b4c7-8003b6984a30\") " Jan 22 15:47:39 crc kubenswrapper[4825]: I0122 15:47:39.319708 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5eac43-e644-430f-b4c7-8003b6984a30-config-data\") pod \"1d5eac43-e644-430f-b4c7-8003b6984a30\" (UID: \"1d5eac43-e644-430f-b4c7-8003b6984a30\") " Jan 22 15:47:39 crc kubenswrapper[4825]: I0122 15:47:39.319743 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d5eac43-e644-430f-b4c7-8003b6984a30-scripts\") pod \"1d5eac43-e644-430f-b4c7-8003b6984a30\" (UID: \"1d5eac43-e644-430f-b4c7-8003b6984a30\") " Jan 22 15:47:39 crc kubenswrapper[4825]: I0122 15:47:39.328191 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d5eac43-e644-430f-b4c7-8003b6984a30-scripts" (OuterVolumeSpecName: "scripts") pod "1d5eac43-e644-430f-b4c7-8003b6984a30" (UID: "1d5eac43-e644-430f-b4c7-8003b6984a30"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:47:39 crc kubenswrapper[4825]: I0122 15:47:39.328431 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d5eac43-e644-430f-b4c7-8003b6984a30-kube-api-access-mpjn2" (OuterVolumeSpecName: "kube-api-access-mpjn2") pod "1d5eac43-e644-430f-b4c7-8003b6984a30" (UID: "1d5eac43-e644-430f-b4c7-8003b6984a30"). InnerVolumeSpecName "kube-api-access-mpjn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:47:39 crc kubenswrapper[4825]: I0122 15:47:39.617464 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d5eac43-e644-430f-b4c7-8003b6984a30-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:39 crc kubenswrapper[4825]: I0122 15:47:39.617492 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpjn2\" (UniqueName: \"kubernetes.io/projected/1d5eac43-e644-430f-b4c7-8003b6984a30-kube-api-access-mpjn2\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:39 crc kubenswrapper[4825]: I0122 15:47:39.628942 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d5eac43-e644-430f-b4c7-8003b6984a30-config-data" (OuterVolumeSpecName: "config-data") pod "1d5eac43-e644-430f-b4c7-8003b6984a30" (UID: "1d5eac43-e644-430f-b4c7-8003b6984a30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:47:39 crc kubenswrapper[4825]: I0122 15:47:39.657833 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d5eac43-e644-430f-b4c7-8003b6984a30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d5eac43-e644-430f-b4c7-8003b6984a30" (UID: "1d5eac43-e644-430f-b4c7-8003b6984a30"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:47:39 crc kubenswrapper[4825]: I0122 15:47:39.722739 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5eac43-e644-430f-b4c7-8003b6984a30-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 15:47:39 crc kubenswrapper[4825]: I0122 15:47:39.722784 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5eac43-e644-430f-b4c7-8003b6984a30-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 15:47:39 crc kubenswrapper[4825]: I0122 15:47:39.746102 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 22 15:47:39 crc kubenswrapper[4825]: I0122 15:47:39.746141 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 22 15:47:40 crc kubenswrapper[4825]: I0122 15:47:40.393456 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a3f8382-fadc-4144-83c0-b378ebcbdef1","Type":"ContainerStarted","Data":"3fa16d745a52b0d8eb1a53955ddaa029a42cd29fbf9a43a3b74b0af248944a2f"}
Jan 22 15:47:40 crc kubenswrapper[4825]: I0122 15:47:40.393641 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-q7pvf"
Jan 22 15:47:40 crc kubenswrapper[4825]: I0122 15:47:40.638861 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 22 15:47:40 crc kubenswrapper[4825]: I0122 15:47:40.639422 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3cb504e0-4450-4771-943e-1f4ebe7c074e" containerName="nova-api-log" containerID="cri-o://4a5a4778dc4e5d84b2b122dc02977731a80395ebe450ef0eed7671fffb256ac5" gracePeriod=30
Jan 22 15:47:40 crc kubenswrapper[4825]: I0122 15:47:40.640066 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3cb504e0-4450-4771-943e-1f4ebe7c074e" containerName="nova-api-api" containerID="cri-o://64c6dc84ab0a5b41785a7e0cb6a913c8ae5b9d890b5ace34a31b513f807cbea6" gracePeriod=30
Jan 22 15:47:40 crc kubenswrapper[4825]: I0122 15:47:40.660689 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 22 15:47:40 crc kubenswrapper[4825]: I0122 15:47:40.660914 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="df949131-0f5e-4264-bbf3-a62b57cfb952" containerName="nova-scheduler-scheduler" containerID="cri-o://c18c983be3c8183e56ce6db0da5407ebf37285663e8e4af9a49a5d9d9d8461d7" gracePeriod=30
Jan 22 15:47:40 crc kubenswrapper[4825]: I0122 15:47:40.695102 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 15:47:41 crc kubenswrapper[4825]: I0122 15:47:41.416925 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a3f8382-fadc-4144-83c0-b378ebcbdef1","Type":"ContainerStarted","Data":"48130e468c1a7dfa8c09d2ac329da1b70f1154de3ed153b322dd826ae28cdbaa"}
Jan 22 15:47:41 crc kubenswrapper[4825]: I0122 15:47:41.417189 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a3f8382-fadc-4144-83c0-b378ebcbdef1","Type":"ContainerStarted","Data":"b98948de4b2e236681339901453f88a3ca638a7983f78903659121bf53181953"}
Jan 22 15:47:41 crc kubenswrapper[4825]: I0122 15:47:41.420262 4825 generic.go:334] "Generic (PLEG): container finished" podID="3cb504e0-4450-4771-943e-1f4ebe7c074e" containerID="4a5a4778dc4e5d84b2b122dc02977731a80395ebe450ef0eed7671fffb256ac5" exitCode=143
Jan 22 15:47:41 crc kubenswrapper[4825]: I0122 15:47:41.420341 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cb504e0-4450-4771-943e-1f4ebe7c074e","Type":"ContainerDied","Data":"4a5a4778dc4e5d84b2b122dc02977731a80395ebe450ef0eed7671fffb256ac5"}
Jan 22 15:47:41 crc kubenswrapper[4825]: I0122 15:47:41.420524 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2e3cf9b5-85d3-4137-8be1-8b4a865f974c" containerName="nova-metadata-log" containerID="cri-o://724458ccee11874f2c37a882fb5cc8dbc1492c99b7ae0135f68680012403094b" gracePeriod=30
Jan 22 15:47:41 crc kubenswrapper[4825]: I0122 15:47:41.420567 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2e3cf9b5-85d3-4137-8be1-8b4a865f974c" containerName="nova-metadata-metadata" containerID="cri-o://40a8f028d6b3504c50202ee6e49258598561dd7c9cbf938b1358cf3e6ccb83b2" gracePeriod=30
Jan 22 15:47:41 crc kubenswrapper[4825]: E0122 15:47:41.961626 4825 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c18c983be3c8183e56ce6db0da5407ebf37285663e8e4af9a49a5d9d9d8461d7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 22 15:47:41 crc kubenswrapper[4825]: E0122 15:47:41.963644 4825 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c18c983be3c8183e56ce6db0da5407ebf37285663e8e4af9a49a5d9d9d8461d7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 22 15:47:41 crc kubenswrapper[4825]: E0122 15:47:41.965201 4825 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c18c983be3c8183e56ce6db0da5407ebf37285663e8e4af9a49a5d9d9d8461d7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 22 15:47:41 crc kubenswrapper[4825]: E0122 15:47:41.965268 4825 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="df949131-0f5e-4264-bbf3-a62b57cfb952" containerName="nova-scheduler-scheduler"
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.336045 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.396742 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-nova-metadata-tls-certs\") pod \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") "
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.396816 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-logs\") pod \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") "
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.397063 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-combined-ca-bundle\") pod \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") "
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.397107 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-config-data\") pod \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") "
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.397325 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-logs" (OuterVolumeSpecName: "logs") pod "2e3cf9b5-85d3-4137-8be1-8b4a865f974c" (UID: "2e3cf9b5-85d3-4137-8be1-8b4a865f974c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.397492 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cblkh\" (UniqueName: \"kubernetes.io/projected/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-kube-api-access-cblkh\") pod \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\" (UID: \"2e3cf9b5-85d3-4137-8be1-8b4a865f974c\") "
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.398197 4825 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-logs\") on node \"crc\" DevicePath \"\""
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.420784 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-kube-api-access-cblkh" (OuterVolumeSpecName: "kube-api-access-cblkh") pod "2e3cf9b5-85d3-4137-8be1-8b4a865f974c" (UID: "2e3cf9b5-85d3-4137-8be1-8b4a865f974c"). InnerVolumeSpecName "kube-api-access-cblkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.432610 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-config-data" (OuterVolumeSpecName: "config-data") pod "2e3cf9b5-85d3-4137-8be1-8b4a865f974c" (UID: "2e3cf9b5-85d3-4137-8be1-8b4a865f974c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.434330 4825 generic.go:334] "Generic (PLEG): container finished" podID="3802a459-6af8-4a3f-8087-529583d75594" containerID="f8d85efd37ea8dc7ff0830b2241a6253c94fbd1de81fee3b3aff4db16b0a1662" exitCode=0
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.434408 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8qbh5" event={"ID":"3802a459-6af8-4a3f-8087-529583d75594","Type":"ContainerDied","Data":"f8d85efd37ea8dc7ff0830b2241a6253c94fbd1de81fee3b3aff4db16b0a1662"}
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.440234 4825 generic.go:334] "Generic (PLEG): container finished" podID="2e3cf9b5-85d3-4137-8be1-8b4a865f974c" containerID="40a8f028d6b3504c50202ee6e49258598561dd7c9cbf938b1358cf3e6ccb83b2" exitCode=0
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.440270 4825 generic.go:334] "Generic (PLEG): container finished" podID="2e3cf9b5-85d3-4137-8be1-8b4a865f974c" containerID="724458ccee11874f2c37a882fb5cc8dbc1492c99b7ae0135f68680012403094b" exitCode=143
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.440302 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e3cf9b5-85d3-4137-8be1-8b4a865f974c","Type":"ContainerDied","Data":"40a8f028d6b3504c50202ee6e49258598561dd7c9cbf938b1358cf3e6ccb83b2"}
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.440336 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e3cf9b5-85d3-4137-8be1-8b4a865f974c","Type":"ContainerDied","Data":"724458ccee11874f2c37a882fb5cc8dbc1492c99b7ae0135f68680012403094b"}
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.440349 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e3cf9b5-85d3-4137-8be1-8b4a865f974c","Type":"ContainerDied","Data":"5c9ad956d8d506550d7fcf16624400622b0e2990b292e0a9f99ee7d8bbbef007"}
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.440372 4825 scope.go:117] "RemoveContainer" containerID="40a8f028d6b3504c50202ee6e49258598561dd7c9cbf938b1358cf3e6ccb83b2"
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.440582 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.470565 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e3cf9b5-85d3-4137-8be1-8b4a865f974c" (UID: "2e3cf9b5-85d3-4137-8be1-8b4a865f974c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.479129 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2e3cf9b5-85d3-4137-8be1-8b4a865f974c" (UID: "2e3cf9b5-85d3-4137-8be1-8b4a865f974c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.500690 4825 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.500740 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.500761 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.500775 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cblkh\" (UniqueName: \"kubernetes.io/projected/2e3cf9b5-85d3-4137-8be1-8b4a865f974c-kube-api-access-cblkh\") on node \"crc\" DevicePath \"\""
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.506549 4825 scope.go:117] "RemoveContainer" containerID="724458ccee11874f2c37a882fb5cc8dbc1492c99b7ae0135f68680012403094b"
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.537052 4825 scope.go:117] "RemoveContainer" containerID="40a8f028d6b3504c50202ee6e49258598561dd7c9cbf938b1358cf3e6ccb83b2"
Jan 22 15:47:42 crc kubenswrapper[4825]: E0122 15:47:42.537587 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40a8f028d6b3504c50202ee6e49258598561dd7c9cbf938b1358cf3e6ccb83b2\": container with ID starting with 40a8f028d6b3504c50202ee6e49258598561dd7c9cbf938b1358cf3e6ccb83b2 not found: ID does not exist" containerID="40a8f028d6b3504c50202ee6e49258598561dd7c9cbf938b1358cf3e6ccb83b2"
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.537678 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40a8f028d6b3504c50202ee6e49258598561dd7c9cbf938b1358cf3e6ccb83b2"} err="failed to get container status \"40a8f028d6b3504c50202ee6e49258598561dd7c9cbf938b1358cf3e6ccb83b2\": rpc error: code = NotFound desc = could not find container \"40a8f028d6b3504c50202ee6e49258598561dd7c9cbf938b1358cf3e6ccb83b2\": container with ID starting with 40a8f028d6b3504c50202ee6e49258598561dd7c9cbf938b1358cf3e6ccb83b2 not found: ID does not exist"
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.537743 4825 scope.go:117] "RemoveContainer" containerID="724458ccee11874f2c37a882fb5cc8dbc1492c99b7ae0135f68680012403094b"
Jan 22 15:47:42 crc kubenswrapper[4825]: E0122 15:47:42.538292 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"724458ccee11874f2c37a882fb5cc8dbc1492c99b7ae0135f68680012403094b\": container with ID starting with 724458ccee11874f2c37a882fb5cc8dbc1492c99b7ae0135f68680012403094b not found: ID does not exist" containerID="724458ccee11874f2c37a882fb5cc8dbc1492c99b7ae0135f68680012403094b"
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.538339 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"724458ccee11874f2c37a882fb5cc8dbc1492c99b7ae0135f68680012403094b"} err="failed to get container status \"724458ccee11874f2c37a882fb5cc8dbc1492c99b7ae0135f68680012403094b\": rpc error: code = NotFound desc = could not find container \"724458ccee11874f2c37a882fb5cc8dbc1492c99b7ae0135f68680012403094b\": container with ID starting with 724458ccee11874f2c37a882fb5cc8dbc1492c99b7ae0135f68680012403094b not found: ID does not exist"
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.538371 4825 scope.go:117] "RemoveContainer" containerID="40a8f028d6b3504c50202ee6e49258598561dd7c9cbf938b1358cf3e6ccb83b2"
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.538655 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40a8f028d6b3504c50202ee6e49258598561dd7c9cbf938b1358cf3e6ccb83b2"} err="failed to get container status \"40a8f028d6b3504c50202ee6e49258598561dd7c9cbf938b1358cf3e6ccb83b2\": rpc error: code = NotFound desc = could not find container \"40a8f028d6b3504c50202ee6e49258598561dd7c9cbf938b1358cf3e6ccb83b2\": container with ID starting with 40a8f028d6b3504c50202ee6e49258598561dd7c9cbf938b1358cf3e6ccb83b2 not found: ID does not exist"
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.538729 4825 scope.go:117] "RemoveContainer" containerID="724458ccee11874f2c37a882fb5cc8dbc1492c99b7ae0135f68680012403094b"
Jan 22 15:47:42 crc kubenswrapper[4825]: I0122 15:47:42.539066 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"724458ccee11874f2c37a882fb5cc8dbc1492c99b7ae0135f68680012403094b"} err="failed to get container status \"724458ccee11874f2c37a882fb5cc8dbc1492c99b7ae0135f68680012403094b\": rpc error: code = NotFound desc = could not find container \"724458ccee11874f2c37a882fb5cc8dbc1492c99b7ae0135f68680012403094b\": container with ID starting with 724458ccee11874f2c37a882fb5cc8dbc1492c99b7ae0135f68680012403094b not found: ID does not exist"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.063958 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 15:47:43 crc kubenswrapper[4825]: E0122 15:47:43.115653 4825 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e3cf9b5_85d3_4137_8be1_8b4a865f974c.slice\": RecentStats: unable to find data in memory cache]"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.120066 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.142194 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 15:47:43 crc kubenswrapper[4825]: E0122 15:47:43.142893 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3cf9b5-85d3-4137-8be1-8b4a865f974c" containerName="nova-metadata-log"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.142917 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3cf9b5-85d3-4137-8be1-8b4a865f974c" containerName="nova-metadata-log"
Jan 22 15:47:43 crc kubenswrapper[4825]: E0122 15:47:43.142929 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3cf9b5-85d3-4137-8be1-8b4a865f974c" containerName="nova-metadata-metadata"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.142938 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3cf9b5-85d3-4137-8be1-8b4a865f974c" containerName="nova-metadata-metadata"
Jan 22 15:47:43 crc kubenswrapper[4825]: E0122 15:47:43.142956 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d5eac43-e644-430f-b4c7-8003b6984a30" containerName="nova-manage"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.142964 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d5eac43-e644-430f-b4c7-8003b6984a30" containerName="nova-manage"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.143332 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e3cf9b5-85d3-4137-8be1-8b4a865f974c" containerName="nova-metadata-metadata"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.143361 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e3cf9b5-85d3-4137-8be1-8b4a865f974c" containerName="nova-metadata-log"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.143378 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d5eac43-e644-430f-b4c7-8003b6984a30" containerName="nova-manage"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.145116 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.153420 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.153684 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.162438 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.249174 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/952055c6-1b43-4621-9fd9-4078d8539301-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.249393 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qqbv\" (UniqueName: \"kubernetes.io/projected/952055c6-1b43-4621-9fd9-4078d8539301-kube-api-access-7qqbv\") pod \"nova-metadata-0\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.249544 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/952055c6-1b43-4621-9fd9-4078d8539301-logs\") pod \"nova-metadata-0\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.249573 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/952055c6-1b43-4621-9fd9-4078d8539301-config-data\") pod \"nova-metadata-0\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.249643 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/952055c6-1b43-4621-9fd9-4078d8539301-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.396278 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qqbv\" (UniqueName: \"kubernetes.io/projected/952055c6-1b43-4621-9fd9-4078d8539301-kube-api-access-7qqbv\") pod \"nova-metadata-0\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.396436 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/952055c6-1b43-4621-9fd9-4078d8539301-logs\") pod \"nova-metadata-0\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.396501 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/952055c6-1b43-4621-9fd9-4078d8539301-config-data\") pod \"nova-metadata-0\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.396568 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/952055c6-1b43-4621-9fd9-4078d8539301-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.396731 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/952055c6-1b43-4621-9fd9-4078d8539301-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.426477 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/952055c6-1b43-4621-9fd9-4078d8539301-logs\") pod \"nova-metadata-0\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.433802 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/952055c6-1b43-4621-9fd9-4078d8539301-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.434014 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/952055c6-1b43-4621-9fd9-4078d8539301-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.436850 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/952055c6-1b43-4621-9fd9-4078d8539301-config-data\") pod \"nova-metadata-0\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.444300 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qqbv\" (UniqueName: \"kubernetes.io/projected/952055c6-1b43-4621-9fd9-4078d8539301-kube-api-access-7qqbv\") pod \"nova-metadata-0\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " pod="openstack/nova-metadata-0"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.472172 4825 generic.go:334] "Generic (PLEG): container finished" podID="df949131-0f5e-4264-bbf3-a62b57cfb952" containerID="c18c983be3c8183e56ce6db0da5407ebf37285663e8e4af9a49a5d9d9d8461d7" exitCode=0
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.472590 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"df949131-0f5e-4264-bbf3-a62b57cfb952","Type":"ContainerDied","Data":"c18c983be3c8183e56ce6db0da5407ebf37285663e8e4af9a49a5d9d9d8461d7"}
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.472781 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 22 15:47:43 crc kubenswrapper[4825]: I0122 15:47:43.557875 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e3cf9b5-85d3-4137-8be1-8b4a865f974c" path="/var/lib/kubelet/pods/2e3cf9b5-85d3-4137-8be1-8b4a865f974c/volumes"
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.173094 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.343783 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltkp2\" (UniqueName: \"kubernetes.io/projected/df949131-0f5e-4264-bbf3-a62b57cfb952-kube-api-access-ltkp2\") pod \"df949131-0f5e-4264-bbf3-a62b57cfb952\" (UID: \"df949131-0f5e-4264-bbf3-a62b57cfb952\") "
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.344310 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df949131-0f5e-4264-bbf3-a62b57cfb952-config-data\") pod \"df949131-0f5e-4264-bbf3-a62b57cfb952\" (UID: \"df949131-0f5e-4264-bbf3-a62b57cfb952\") "
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.344334 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df949131-0f5e-4264-bbf3-a62b57cfb952-combined-ca-bundle\") pod \"df949131-0f5e-4264-bbf3-a62b57cfb952\" (UID: \"df949131-0f5e-4264-bbf3-a62b57cfb952\") "
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.353091 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df949131-0f5e-4264-bbf3-a62b57cfb952-kube-api-access-ltkp2" (OuterVolumeSpecName: "kube-api-access-ltkp2") pod "df949131-0f5e-4264-bbf3-a62b57cfb952" (UID: "df949131-0f5e-4264-bbf3-a62b57cfb952"). InnerVolumeSpecName "kube-api-access-ltkp2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:47:44 crc kubenswrapper[4825]: W0122 15:47:44.355512 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod952055c6_1b43_4621_9fd9_4078d8539301.slice/crio-1f1eb43d2ab4ca02222e10ba5277307f6ca6eeb75b8e423b5efcb8ed7a86aae8 WatchSource:0}: Error finding container 1f1eb43d2ab4ca02222e10ba5277307f6ca6eeb75b8e423b5efcb8ed7a86aae8: Status 404 returned error can't find the container with id 1f1eb43d2ab4ca02222e10ba5277307f6ca6eeb75b8e423b5efcb8ed7a86aae8
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.358466 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.375858 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8qbh5"
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.395711 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df949131-0f5e-4264-bbf3-a62b57cfb952-config-data" (OuterVolumeSpecName: "config-data") pod "df949131-0f5e-4264-bbf3-a62b57cfb952" (UID: "df949131-0f5e-4264-bbf3-a62b57cfb952"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.426885 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df949131-0f5e-4264-bbf3-a62b57cfb952-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df949131-0f5e-4264-bbf3-a62b57cfb952" (UID: "df949131-0f5e-4264-bbf3-a62b57cfb952"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.447254 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltkp2\" (UniqueName: \"kubernetes.io/projected/df949131-0f5e-4264-bbf3-a62b57cfb952-kube-api-access-ltkp2\") on node \"crc\" DevicePath \"\""
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.447293 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df949131-0f5e-4264-bbf3-a62b57cfb952-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.447303 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df949131-0f5e-4264-bbf3-a62b57cfb952-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.488339 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"952055c6-1b43-4621-9fd9-4078d8539301","Type":"ContainerStarted","Data":"1f1eb43d2ab4ca02222e10ba5277307f6ca6eeb75b8e423b5efcb8ed7a86aae8"}
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.490845 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.495688 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"df949131-0f5e-4264-bbf3-a62b57cfb952","Type":"ContainerDied","Data":"e0312f1695ff0046ee28a9bf1cefc49bc48cb95950183668291814392eb9327f"}
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.495758 4825 scope.go:117] "RemoveContainer" containerID="c18c983be3c8183e56ce6db0da5407ebf37285663e8e4af9a49a5d9d9d8461d7"
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.503188 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8qbh5" event={"ID":"3802a459-6af8-4a3f-8087-529583d75594","Type":"ContainerDied","Data":"f6fbefef3a63109e4f6fe9ca5362737444e43fab5ad768b7b131d47666affaac"}
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.503210 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8qbh5"
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.503231 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6fbefef3a63109e4f6fe9ca5362737444e43fab5ad768b7b131d47666affaac"
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.507554 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a3f8382-fadc-4144-83c0-b378ebcbdef1","Type":"ContainerStarted","Data":"bf43f0dca51ca20d83504cc9529552d828900b4ea91b8a943862c37653c70a99"}
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.508588 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.550210 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m7pd\" (UniqueName: \"kubernetes.io/projected/3802a459-6af8-4a3f-8087-529583d75594-kube-api-access-8m7pd\") pod \"3802a459-6af8-4a3f-8087-529583d75594\" (UID: \"3802a459-6af8-4a3f-8087-529583d75594\") "
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.550280 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3802a459-6af8-4a3f-8087-529583d75594-combined-ca-bundle\") pod \"3802a459-6af8-4a3f-8087-529583d75594\" (UID: \"3802a459-6af8-4a3f-8087-529583d75594\") "
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.550450 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3802a459-6af8-4a3f-8087-529583d75594-config-data\") pod \"3802a459-6af8-4a3f-8087-529583d75594\" (UID: \"3802a459-6af8-4a3f-8087-529583d75594\") "
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.550555 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3802a459-6af8-4a3f-8087-529583d75594-scripts\") pod \"3802a459-6af8-4a3f-8087-529583d75594\" (UID: \"3802a459-6af8-4a3f-8087-529583d75594\") "
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.561212 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 22 15:47:44 crc kubenswrapper[4825]: E0122 15:47:44.563677 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3802a459-6af8-4a3f-8087-529583d75594" containerName="nova-cell1-conductor-db-sync"
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.563709 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="3802a459-6af8-4a3f-8087-529583d75594" containerName="nova-cell1-conductor-db-sync"
Jan 22 15:47:44 crc kubenswrapper[4825]: E0122 15:47:44.563770 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df949131-0f5e-4264-bbf3-a62b57cfb952" containerName="nova-scheduler-scheduler"
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.563781 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="df949131-0f5e-4264-bbf3-a62b57cfb952" containerName="nova-scheduler-scheduler"
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.564271 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="df949131-0f5e-4264-bbf3-a62b57cfb952" containerName="nova-scheduler-scheduler"
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.564297 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="3802a459-6af8-4a3f-8087-529583d75594" containerName="nova-cell1-conductor-db-sync"
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.568285 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.570240 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.962133453 podStartE2EDuration="9.57021193s" podCreationTimestamp="2026-01-22 15:47:35 +0000 UTC" firstStartedPulling="2026-01-22 15:47:37.40802223 +0000 UTC m=+1404.169549140" lastFinishedPulling="2026-01-22 15:47:43.016100707 +0000 UTC m=+1409.777627617" observedRunningTime="2026-01-22 15:47:44.537479428 +0000 UTC m=+1411.299006338" watchObservedRunningTime="2026-01-22 15:47:44.57021193 +0000 UTC m=+1411.331738840"
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.592181 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3802a459-6af8-4a3f-8087-529583d75594-kube-api-access-8m7pd" (OuterVolumeSpecName: "kube-api-access-8m7pd") pod "3802a459-6af8-4a3f-8087-529583d75594" (UID: "3802a459-6af8-4a3f-8087-529583d75594"). InnerVolumeSpecName "kube-api-access-8m7pd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.593099 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3802a459-6af8-4a3f-8087-529583d75594-scripts" (OuterVolumeSpecName: "scripts") pod "3802a459-6af8-4a3f-8087-529583d75594" (UID: "3802a459-6af8-4a3f-8087-529583d75594"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.600718 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.616317 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.629053 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3802a459-6af8-4a3f-8087-529583d75594-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3802a459-6af8-4a3f-8087-529583d75594" (UID: "3802a459-6af8-4a3f-8087-529583d75594"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.630147 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.653277 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fadfc09-61e5-4bda-b09d-2bd3d609dffb-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9fadfc09-61e5-4bda-b09d-2bd3d609dffb\") " pod="openstack/nova-cell1-conductor-0" Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.653453 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fadfc09-61e5-4bda-b09d-2bd3d609dffb-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9fadfc09-61e5-4bda-b09d-2bd3d609dffb\") " pod="openstack/nova-cell1-conductor-0" Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.653598 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv9s2\" (UniqueName: \"kubernetes.io/projected/9fadfc09-61e5-4bda-b09d-2bd3d609dffb-kube-api-access-lv9s2\") pod \"nova-cell1-conductor-0\" (UID: \"9fadfc09-61e5-4bda-b09d-2bd3d609dffb\") " pod="openstack/nova-cell1-conductor-0" Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.653676 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3802a459-6af8-4a3f-8087-529583d75594-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.653693 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m7pd\" (UniqueName: \"kubernetes.io/projected/3802a459-6af8-4a3f-8087-529583d75594-kube-api-access-8m7pd\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.653711 
4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3802a459-6af8-4a3f-8087-529583d75594-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.657465 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3802a459-6af8-4a3f-8087-529583d75594-config-data" (OuterVolumeSpecName: "config-data") pod "3802a459-6af8-4a3f-8087-529583d75594" (UID: "3802a459-6af8-4a3f-8087-529583d75594"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.670632 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.672557 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.677084 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.687223 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.755768 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv9s2\" (UniqueName: \"kubernetes.io/projected/9fadfc09-61e5-4bda-b09d-2bd3d609dffb-kube-api-access-lv9s2\") pod \"nova-cell1-conductor-0\" (UID: \"9fadfc09-61e5-4bda-b09d-2bd3d609dffb\") " pod="openstack/nova-cell1-conductor-0" Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.755881 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3aa787-741e-4ea0-968b-bd87cf38efc5-config-data\") pod \"nova-scheduler-0\" (UID: 
\"2a3aa787-741e-4ea0-968b-bd87cf38efc5\") " pod="openstack/nova-scheduler-0" Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.755928 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fadfc09-61e5-4bda-b09d-2bd3d609dffb-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9fadfc09-61e5-4bda-b09d-2bd3d609dffb\") " pod="openstack/nova-cell1-conductor-0" Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.756047 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fadfc09-61e5-4bda-b09d-2bd3d609dffb-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9fadfc09-61e5-4bda-b09d-2bd3d609dffb\") " pod="openstack/nova-cell1-conductor-0" Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.756136 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrpdg\" (UniqueName: \"kubernetes.io/projected/2a3aa787-741e-4ea0-968b-bd87cf38efc5-kube-api-access-rrpdg\") pod \"nova-scheduler-0\" (UID: \"2a3aa787-741e-4ea0-968b-bd87cf38efc5\") " pod="openstack/nova-scheduler-0" Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.756159 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3aa787-741e-4ea0-968b-bd87cf38efc5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2a3aa787-741e-4ea0-968b-bd87cf38efc5\") " pod="openstack/nova-scheduler-0" Jan 22 15:47:44 crc kubenswrapper[4825]: I0122 15:47:44.976255 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fadfc09-61e5-4bda-b09d-2bd3d609dffb-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9fadfc09-61e5-4bda-b09d-2bd3d609dffb\") " pod="openstack/nova-cell1-conductor-0" Jan 22 15:47:44 crc 
kubenswrapper[4825]: I0122 15:47:44.976347 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3802a459-6af8-4a3f-8087-529583d75594-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:45 crc kubenswrapper[4825]: I0122 15:47:45.028538 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fadfc09-61e5-4bda-b09d-2bd3d609dffb-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9fadfc09-61e5-4bda-b09d-2bd3d609dffb\") " pod="openstack/nova-cell1-conductor-0" Jan 22 15:47:45 crc kubenswrapper[4825]: I0122 15:47:45.029684 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv9s2\" (UniqueName: \"kubernetes.io/projected/9fadfc09-61e5-4bda-b09d-2bd3d609dffb-kube-api-access-lv9s2\") pod \"nova-cell1-conductor-0\" (UID: \"9fadfc09-61e5-4bda-b09d-2bd3d609dffb\") " pod="openstack/nova-cell1-conductor-0" Jan 22 15:47:45 crc kubenswrapper[4825]: I0122 15:47:45.078046 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3aa787-741e-4ea0-968b-bd87cf38efc5-config-data\") pod \"nova-scheduler-0\" (UID: \"2a3aa787-741e-4ea0-968b-bd87cf38efc5\") " pod="openstack/nova-scheduler-0" Jan 22 15:47:45 crc kubenswrapper[4825]: I0122 15:47:45.078261 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrpdg\" (UniqueName: \"kubernetes.io/projected/2a3aa787-741e-4ea0-968b-bd87cf38efc5-kube-api-access-rrpdg\") pod \"nova-scheduler-0\" (UID: \"2a3aa787-741e-4ea0-968b-bd87cf38efc5\") " pod="openstack/nova-scheduler-0" Jan 22 15:47:45 crc kubenswrapper[4825]: I0122 15:47:45.078289 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3aa787-741e-4ea0-968b-bd87cf38efc5-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"2a3aa787-741e-4ea0-968b-bd87cf38efc5\") " pod="openstack/nova-scheduler-0" Jan 22 15:47:45 crc kubenswrapper[4825]: I0122 15:47:45.084383 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3aa787-741e-4ea0-968b-bd87cf38efc5-config-data\") pod \"nova-scheduler-0\" (UID: \"2a3aa787-741e-4ea0-968b-bd87cf38efc5\") " pod="openstack/nova-scheduler-0" Jan 22 15:47:45 crc kubenswrapper[4825]: I0122 15:47:45.085239 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3aa787-741e-4ea0-968b-bd87cf38efc5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2a3aa787-741e-4ea0-968b-bd87cf38efc5\") " pod="openstack/nova-scheduler-0" Jan 22 15:47:45 crc kubenswrapper[4825]: I0122 15:47:45.096799 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrpdg\" (UniqueName: \"kubernetes.io/projected/2a3aa787-741e-4ea0-968b-bd87cf38efc5-kube-api-access-rrpdg\") pod \"nova-scheduler-0\" (UID: \"2a3aa787-741e-4ea0-968b-bd87cf38efc5\") " pod="openstack/nova-scheduler-0" Jan 22 15:47:45 crc kubenswrapper[4825]: I0122 15:47:45.179134 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 15:47:45 crc kubenswrapper[4825]: I0122 15:47:45.216292 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 22 15:47:45 crc kubenswrapper[4825]: I0122 15:47:45.906420 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df949131-0f5e-4264-bbf3-a62b57cfb952" path="/var/lib/kubelet/pods/df949131-0f5e-4264-bbf3-a62b57cfb952/volumes" Jan 22 15:47:45 crc kubenswrapper[4825]: I0122 15:47:45.943108 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"952055c6-1b43-4621-9fd9-4078d8539301","Type":"ContainerStarted","Data":"dc6dccd58d8e3770162d09072d8fe87c08bc3db2385c359ca2f2efa2b955cc92"} Jan 22 15:47:45 crc kubenswrapper[4825]: I0122 15:47:45.943166 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"952055c6-1b43-4621-9fd9-4078d8539301","Type":"ContainerStarted","Data":"db580a11954774ed92bac9da9778a2ed53d03c5579f431a24bbcae0b6e3a033f"} Jan 22 15:47:45 crc kubenswrapper[4825]: I0122 15:47:45.966767 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.966748597 podStartE2EDuration="2.966748597s" podCreationTimestamp="2026-01-22 15:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:47:45.966457989 +0000 UTC m=+1412.727984909" watchObservedRunningTime="2026-01-22 15:47:45.966748597 +0000 UTC m=+1412.728275507" Jan 22 15:47:46 crc kubenswrapper[4825]: I0122 15:47:46.513703 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 15:47:46 crc kubenswrapper[4825]: W0122 15:47:46.514550 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a3aa787_741e_4ea0_968b_bd87cf38efc5.slice/crio-44afc6c65e0598ed466e634b5774e26c6be81e8632e8974aa8dd9b85ab6d7ae7 WatchSource:0}: Error finding container 
44afc6c65e0598ed466e634b5774e26c6be81e8632e8974aa8dd9b85ab6d7ae7: Status 404 returned error can't find the container with id 44afc6c65e0598ed466e634b5774e26c6be81e8632e8974aa8dd9b85ab6d7ae7 Jan 22 15:47:46 crc kubenswrapper[4825]: I0122 15:47:46.529908 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 22 15:47:46 crc kubenswrapper[4825]: I0122 15:47:46.988068 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9fadfc09-61e5-4bda-b09d-2bd3d609dffb","Type":"ContainerStarted","Data":"e9da58bb9458e5714f8a8d44b52cec3111ed8b8626ec17f90a5d89bb3f3c22d3"} Jan 22 15:47:46 crc kubenswrapper[4825]: I0122 15:47:46.988392 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 22 15:47:46 crc kubenswrapper[4825]: I0122 15:47:46.988404 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9fadfc09-61e5-4bda-b09d-2bd3d609dffb","Type":"ContainerStarted","Data":"85b83890258ca2e278f5fe3019c33bcd0ea37424eeded705ec097765d7038f23"} Jan 22 15:47:46 crc kubenswrapper[4825]: I0122 15:47:46.990104 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2a3aa787-741e-4ea0-968b-bd87cf38efc5","Type":"ContainerStarted","Data":"02b12f2c3c0fa48194e78c653fdb99a573a1dcf9c273d4f7746d8c6fcdba8908"} Jan 22 15:47:46 crc kubenswrapper[4825]: I0122 15:47:46.990160 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2a3aa787-741e-4ea0-968b-bd87cf38efc5","Type":"ContainerStarted","Data":"44afc6c65e0598ed466e634b5774e26c6be81e8632e8974aa8dd9b85ab6d7ae7"} Jan 22 15:47:47 crc kubenswrapper[4825]: I0122 15:47:47.004357 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qrqt8" podUID="440a9cba-a225-40d8-b171-c15c47d7d223" containerName="registry-server" 
probeResult="failure" output=< Jan 22 15:47:47 crc kubenswrapper[4825]: timeout: failed to connect service ":50051" within 1s Jan 22 15:47:47 crc kubenswrapper[4825]: > Jan 22 15:47:47 crc kubenswrapper[4825]: I0122 15:47:47.013342 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.013319788 podStartE2EDuration="3.013319788s" podCreationTimestamp="2026-01-22 15:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:47:47.008061378 +0000 UTC m=+1413.769588288" watchObservedRunningTime="2026-01-22 15:47:47.013319788 +0000 UTC m=+1413.774846698" Jan 22 15:47:47 crc kubenswrapper[4825]: I0122 15:47:47.033101 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.0330782 podStartE2EDuration="3.0330782s" podCreationTimestamp="2026-01-22 15:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:47:47.024685341 +0000 UTC m=+1413.786212251" watchObservedRunningTime="2026-01-22 15:47:47.0330782 +0000 UTC m=+1413.794605110" Jan 22 15:47:47 crc kubenswrapper[4825]: I0122 15:47:47.306093 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 15:47:47 crc kubenswrapper[4825]: I0122 15:47:47.306142 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 15:47:47 crc kubenswrapper[4825]: I0122 15:47:47.928555 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.004953 4825 generic.go:334] "Generic (PLEG): container finished" podID="3cb504e0-4450-4771-943e-1f4ebe7c074e" containerID="64c6dc84ab0a5b41785a7e0cb6a913c8ae5b9d890b5ace34a31b513f807cbea6" exitCode=0 Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.005211 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cb504e0-4450-4771-943e-1f4ebe7c074e","Type":"ContainerDied","Data":"64c6dc84ab0a5b41785a7e0cb6a913c8ae5b9d890b5ace34a31b513f807cbea6"} Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.005254 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cb504e0-4450-4771-943e-1f4ebe7c074e","Type":"ContainerDied","Data":"96098c7a2a57b5d7a952c9ea745f84468bfb8b42f76ae2adc0b981c3c667e34b"} Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.005275 4825 scope.go:117] "RemoveContainer" containerID="64c6dc84ab0a5b41785a7e0cb6a913c8ae5b9d890b5ace34a31b513f807cbea6" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.005280 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.032216 4825 scope.go:117] "RemoveContainer" containerID="4a5a4778dc4e5d84b2b122dc02977731a80395ebe450ef0eed7671fffb256ac5" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.054394 4825 scope.go:117] "RemoveContainer" containerID="64c6dc84ab0a5b41785a7e0cb6a913c8ae5b9d890b5ace34a31b513f807cbea6" Jan 22 15:47:48 crc kubenswrapper[4825]: E0122 15:47:48.055193 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64c6dc84ab0a5b41785a7e0cb6a913c8ae5b9d890b5ace34a31b513f807cbea6\": container with ID starting with 64c6dc84ab0a5b41785a7e0cb6a913c8ae5b9d890b5ace34a31b513f807cbea6 not found: ID does not exist" containerID="64c6dc84ab0a5b41785a7e0cb6a913c8ae5b9d890b5ace34a31b513f807cbea6" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.055235 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64c6dc84ab0a5b41785a7e0cb6a913c8ae5b9d890b5ace34a31b513f807cbea6"} err="failed to get container status \"64c6dc84ab0a5b41785a7e0cb6a913c8ae5b9d890b5ace34a31b513f807cbea6\": rpc error: code = NotFound desc = could not find container \"64c6dc84ab0a5b41785a7e0cb6a913c8ae5b9d890b5ace34a31b513f807cbea6\": container with ID starting with 64c6dc84ab0a5b41785a7e0cb6a913c8ae5b9d890b5ace34a31b513f807cbea6 not found: ID does not exist" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.055262 4825 scope.go:117] "RemoveContainer" containerID="4a5a4778dc4e5d84b2b122dc02977731a80395ebe450ef0eed7671fffb256ac5" Jan 22 15:47:48 crc kubenswrapper[4825]: E0122 15:47:48.055506 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a5a4778dc4e5d84b2b122dc02977731a80395ebe450ef0eed7671fffb256ac5\": container with ID starting with 
4a5a4778dc4e5d84b2b122dc02977731a80395ebe450ef0eed7671fffb256ac5 not found: ID does not exist" containerID="4a5a4778dc4e5d84b2b122dc02977731a80395ebe450ef0eed7671fffb256ac5" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.055585 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a5a4778dc4e5d84b2b122dc02977731a80395ebe450ef0eed7671fffb256ac5"} err="failed to get container status \"4a5a4778dc4e5d84b2b122dc02977731a80395ebe450ef0eed7671fffb256ac5\": rpc error: code = NotFound desc = could not find container \"4a5a4778dc4e5d84b2b122dc02977731a80395ebe450ef0eed7671fffb256ac5\": container with ID starting with 4a5a4778dc4e5d84b2b122dc02977731a80395ebe450ef0eed7671fffb256ac5 not found: ID does not exist" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.081921 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cb504e0-4450-4771-943e-1f4ebe7c074e-logs\") pod \"3cb504e0-4450-4771-943e-1f4ebe7c074e\" (UID: \"3cb504e0-4450-4771-943e-1f4ebe7c074e\") " Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.082039 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb504e0-4450-4771-943e-1f4ebe7c074e-combined-ca-bundle\") pod \"3cb504e0-4450-4771-943e-1f4ebe7c074e\" (UID: \"3cb504e0-4450-4771-943e-1f4ebe7c074e\") " Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.082184 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8sq8\" (UniqueName: \"kubernetes.io/projected/3cb504e0-4450-4771-943e-1f4ebe7c074e-kube-api-access-w8sq8\") pod \"3cb504e0-4450-4771-943e-1f4ebe7c074e\" (UID: \"3cb504e0-4450-4771-943e-1f4ebe7c074e\") " Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.082241 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/3cb504e0-4450-4771-943e-1f4ebe7c074e-config-data\") pod \"3cb504e0-4450-4771-943e-1f4ebe7c074e\" (UID: \"3cb504e0-4450-4771-943e-1f4ebe7c074e\") " Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.082588 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cb504e0-4450-4771-943e-1f4ebe7c074e-logs" (OuterVolumeSpecName: "logs") pod "3cb504e0-4450-4771-943e-1f4ebe7c074e" (UID: "3cb504e0-4450-4771-943e-1f4ebe7c074e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.083958 4825 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cb504e0-4450-4771-943e-1f4ebe7c074e-logs\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.090224 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb504e0-4450-4771-943e-1f4ebe7c074e-kube-api-access-w8sq8" (OuterVolumeSpecName: "kube-api-access-w8sq8") pod "3cb504e0-4450-4771-943e-1f4ebe7c074e" (UID: "3cb504e0-4450-4771-943e-1f4ebe7c074e"). InnerVolumeSpecName "kube-api-access-w8sq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.117458 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb504e0-4450-4771-943e-1f4ebe7c074e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cb504e0-4450-4771-943e-1f4ebe7c074e" (UID: "3cb504e0-4450-4771-943e-1f4ebe7c074e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.121908 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb504e0-4450-4771-943e-1f4ebe7c074e-config-data" (OuterVolumeSpecName: "config-data") pod "3cb504e0-4450-4771-943e-1f4ebe7c074e" (UID: "3cb504e0-4450-4771-943e-1f4ebe7c074e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.185875 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8sq8\" (UniqueName: \"kubernetes.io/projected/3cb504e0-4450-4771-943e-1f4ebe7c074e-kube-api-access-w8sq8\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.185908 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cb504e0-4450-4771-943e-1f4ebe7c074e-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.185920 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb504e0-4450-4771-943e-1f4ebe7c074e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.348015 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.360690 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.379612 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 15:47:48 crc kubenswrapper[4825]: E0122 15:47:48.380972 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb504e0-4450-4771-943e-1f4ebe7c074e" containerName="nova-api-api" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.381126 4825 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb504e0-4450-4771-943e-1f4ebe7c074e" containerName="nova-api-api" Jan 22 15:47:48 crc kubenswrapper[4825]: E0122 15:47:48.381269 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb504e0-4450-4771-943e-1f4ebe7c074e" containerName="nova-api-log" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.381340 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb504e0-4450-4771-943e-1f4ebe7c074e" containerName="nova-api-log" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.382099 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb504e0-4450-4771-943e-1f4ebe7c074e" containerName="nova-api-log" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.382213 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb504e0-4450-4771-943e-1f4ebe7c074e" containerName="nova-api-api" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.385039 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.391498 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.402537 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.474122 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.474191 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.497468 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlzd5\" (UniqueName: \"kubernetes.io/projected/1bc67821-b410-4ee3-b701-dec227fc8c56-kube-api-access-rlzd5\") pod \"nova-api-0\" (UID: \"1bc67821-b410-4ee3-b701-dec227fc8c56\") " pod="openstack/nova-api-0" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.497921 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bc67821-b410-4ee3-b701-dec227fc8c56-logs\") pod \"nova-api-0\" (UID: \"1bc67821-b410-4ee3-b701-dec227fc8c56\") " pod="openstack/nova-api-0" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.498123 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bc67821-b410-4ee3-b701-dec227fc8c56-config-data\") pod \"nova-api-0\" (UID: \"1bc67821-b410-4ee3-b701-dec227fc8c56\") " pod="openstack/nova-api-0" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.498452 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1bc67821-b410-4ee3-b701-dec227fc8c56-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1bc67821-b410-4ee3-b701-dec227fc8c56\") " pod="openstack/nova-api-0" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.601372 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlzd5\" (UniqueName: \"kubernetes.io/projected/1bc67821-b410-4ee3-b701-dec227fc8c56-kube-api-access-rlzd5\") pod \"nova-api-0\" (UID: \"1bc67821-b410-4ee3-b701-dec227fc8c56\") " pod="openstack/nova-api-0" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.601583 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bc67821-b410-4ee3-b701-dec227fc8c56-logs\") pod \"nova-api-0\" (UID: \"1bc67821-b410-4ee3-b701-dec227fc8c56\") " pod="openstack/nova-api-0" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.601659 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bc67821-b410-4ee3-b701-dec227fc8c56-config-data\") pod \"nova-api-0\" (UID: \"1bc67821-b410-4ee3-b701-dec227fc8c56\") " pod="openstack/nova-api-0" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.601898 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc67821-b410-4ee3-b701-dec227fc8c56-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1bc67821-b410-4ee3-b701-dec227fc8c56\") " pod="openstack/nova-api-0" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.602695 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bc67821-b410-4ee3-b701-dec227fc8c56-logs\") pod \"nova-api-0\" (UID: \"1bc67821-b410-4ee3-b701-dec227fc8c56\") " pod="openstack/nova-api-0" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.610787 4825 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc67821-b410-4ee3-b701-dec227fc8c56-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1bc67821-b410-4ee3-b701-dec227fc8c56\") " pod="openstack/nova-api-0" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.626622 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlzd5\" (UniqueName: \"kubernetes.io/projected/1bc67821-b410-4ee3-b701-dec227fc8c56-kube-api-access-rlzd5\") pod \"nova-api-0\" (UID: \"1bc67821-b410-4ee3-b701-dec227fc8c56\") " pod="openstack/nova-api-0" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.630912 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bc67821-b410-4ee3-b701-dec227fc8c56-config-data\") pod \"nova-api-0\" (UID: \"1bc67821-b410-4ee3-b701-dec227fc8c56\") " pod="openstack/nova-api-0" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.673650 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 22 15:47:48 crc kubenswrapper[4825]: I0122 15:47:48.736914 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 15:47:49 crc kubenswrapper[4825]: I0122 15:47:49.267696 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 15:47:49 crc kubenswrapper[4825]: I0122 15:47:49.535549 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb504e0-4450-4771-943e-1f4ebe7c074e" path="/var/lib/kubelet/pods/3cb504e0-4450-4771-943e-1f4ebe7c074e/volumes" Jan 22 15:47:50 crc kubenswrapper[4825]: I0122 15:47:50.038549 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bc67821-b410-4ee3-b701-dec227fc8c56","Type":"ContainerStarted","Data":"4d84713498475e4c8851078ff630b76c551d177d9c3403b329c2a80e8061cf79"} Jan 22 15:47:50 crc kubenswrapper[4825]: I0122 15:47:50.038865 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bc67821-b410-4ee3-b701-dec227fc8c56","Type":"ContainerStarted","Data":"26c368ff9baa77c02dfe169bf835f9e4199f9735e16dd07d11a321f2142888ce"} Jan 22 15:47:50 crc kubenswrapper[4825]: I0122 15:47:50.038877 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bc67821-b410-4ee3-b701-dec227fc8c56","Type":"ContainerStarted","Data":"fae71e0f2f8b3cc82cca3d3e1e33d9d651e228e78c402c78539d762e900685af"} Jan 22 15:47:50 crc kubenswrapper[4825]: I0122 15:47:50.179668 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 15:47:53 crc kubenswrapper[4825]: I0122 15:47:53.474166 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 15:47:53 crc kubenswrapper[4825]: I0122 15:47:53.474799 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 15:47:54 crc kubenswrapper[4825]: I0122 15:47:54.486269 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="952055c6-1b43-4621-9fd9-4078d8539301" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.231:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 15:47:54 crc kubenswrapper[4825]: I0122 15:47:54.486303 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="952055c6-1b43-4621-9fd9-4078d8539301" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.231:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 15:47:55 crc kubenswrapper[4825]: I0122 15:47:55.179709 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 22 15:47:55 crc kubenswrapper[4825]: I0122 15:47:55.220295 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 22 15:47:55 crc kubenswrapper[4825]: I0122 15:47:55.248017 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=7.247996354 podStartE2EDuration="7.247996354s" podCreationTimestamp="2026-01-22 15:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:47:50.064500149 +0000 UTC m=+1416.826027069" watchObservedRunningTime="2026-01-22 15:47:55.247996354 +0000 UTC m=+1422.009523264" Jan 22 15:47:55 crc kubenswrapper[4825]: I0122 15:47:55.257344 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 22 15:47:55 crc kubenswrapper[4825]: I0122 15:47:55.987852 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qrqt8" Jan 22 15:47:56 crc kubenswrapper[4825]: I0122 15:47:56.210753 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-qrqt8" Jan 22 15:47:56 crc kubenswrapper[4825]: I0122 15:47:56.284660 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 22 15:47:56 crc kubenswrapper[4825]: I0122 15:47:56.290942 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qrqt8"] Jan 22 15:47:57 crc kubenswrapper[4825]: I0122 15:47:57.314526 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qrqt8" podUID="440a9cba-a225-40d8-b171-c15c47d7d223" containerName="registry-server" containerID="cri-o://9dbfb12199e63ee4f3b1a305e51036c89e31d6ddcdbe76b4dcd77939a4382aba" gracePeriod=2 Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.334722 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qrqt8" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.349803 4825 generic.go:334] "Generic (PLEG): container finished" podID="440a9cba-a225-40d8-b171-c15c47d7d223" containerID="9dbfb12199e63ee4f3b1a305e51036c89e31d6ddcdbe76b4dcd77939a4382aba" exitCode=0 Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.350047 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrqt8" event={"ID":"440a9cba-a225-40d8-b171-c15c47d7d223","Type":"ContainerDied","Data":"9dbfb12199e63ee4f3b1a305e51036c89e31d6ddcdbe76b4dcd77939a4382aba"} Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.350112 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrqt8" event={"ID":"440a9cba-a225-40d8-b171-c15c47d7d223","Type":"ContainerDied","Data":"2966c25b45074702d7af8a00324986fe5f18f12f089e5dd15690381935a396f9"} Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.350132 4825 scope.go:117] "RemoveContainer" 
containerID="9dbfb12199e63ee4f3b1a305e51036c89e31d6ddcdbe76b4dcd77939a4382aba" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.356735 4825 generic.go:334] "Generic (PLEG): container finished" podID="a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc" containerID="cc901ab508fbce2c4f479010e2ca5d9207e071ccb4186a6875e127ac1c224744" exitCode=137 Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.356795 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc","Type":"ContainerDied","Data":"cc901ab508fbce2c4f479010e2ca5d9207e071ccb4186a6875e127ac1c224744"} Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.370910 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/440a9cba-a225-40d8-b171-c15c47d7d223-catalog-content\") pod \"440a9cba-a225-40d8-b171-c15c47d7d223\" (UID: \"440a9cba-a225-40d8-b171-c15c47d7d223\") " Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.371170 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/440a9cba-a225-40d8-b171-c15c47d7d223-utilities\") pod \"440a9cba-a225-40d8-b171-c15c47d7d223\" (UID: \"440a9cba-a225-40d8-b171-c15c47d7d223\") " Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.371209 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st7bt\" (UniqueName: \"kubernetes.io/projected/440a9cba-a225-40d8-b171-c15c47d7d223-kube-api-access-st7bt\") pod \"440a9cba-a225-40d8-b171-c15c47d7d223\" (UID: \"440a9cba-a225-40d8-b171-c15c47d7d223\") " Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.378708 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/440a9cba-a225-40d8-b171-c15c47d7d223-kube-api-access-st7bt" (OuterVolumeSpecName: "kube-api-access-st7bt") pod 
"440a9cba-a225-40d8-b171-c15c47d7d223" (UID: "440a9cba-a225-40d8-b171-c15c47d7d223"). InnerVolumeSpecName "kube-api-access-st7bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.379194 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/440a9cba-a225-40d8-b171-c15c47d7d223-utilities" (OuterVolumeSpecName: "utilities") pod "440a9cba-a225-40d8-b171-c15c47d7d223" (UID: "440a9cba-a225-40d8-b171-c15c47d7d223"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.420230 4825 scope.go:117] "RemoveContainer" containerID="06d1c0de023ea77b2a5c10f56136ecdc5f2af3e5637480e5175d60c43245fc8d" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.472040 4825 scope.go:117] "RemoveContainer" containerID="26b6493a41a2935c0251c9efc0c32c3eaa5e58793987c28aead832799f0c03db" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.473852 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/440a9cba-a225-40d8-b171-c15c47d7d223-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.473889 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st7bt\" (UniqueName: \"kubernetes.io/projected/440a9cba-a225-40d8-b171-c15c47d7d223-kube-api-access-st7bt\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.497605 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/440a9cba-a225-40d8-b171-c15c47d7d223-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "440a9cba-a225-40d8-b171-c15c47d7d223" (UID: "440a9cba-a225-40d8-b171-c15c47d7d223"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.731295 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/440a9cba-a225-40d8-b171-c15c47d7d223-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.739998 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.740037 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.779220 4825 scope.go:117] "RemoveContainer" containerID="9dbfb12199e63ee4f3b1a305e51036c89e31d6ddcdbe76b4dcd77939a4382aba" Jan 22 15:47:58 crc kubenswrapper[4825]: E0122 15:47:58.794020 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dbfb12199e63ee4f3b1a305e51036c89e31d6ddcdbe76b4dcd77939a4382aba\": container with ID starting with 9dbfb12199e63ee4f3b1a305e51036c89e31d6ddcdbe76b4dcd77939a4382aba not found: ID does not exist" containerID="9dbfb12199e63ee4f3b1a305e51036c89e31d6ddcdbe76b4dcd77939a4382aba" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.794090 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dbfb12199e63ee4f3b1a305e51036c89e31d6ddcdbe76b4dcd77939a4382aba"} err="failed to get container status \"9dbfb12199e63ee4f3b1a305e51036c89e31d6ddcdbe76b4dcd77939a4382aba\": rpc error: code = NotFound desc = could not find container \"9dbfb12199e63ee4f3b1a305e51036c89e31d6ddcdbe76b4dcd77939a4382aba\": container with ID starting with 9dbfb12199e63ee4f3b1a305e51036c89e31d6ddcdbe76b4dcd77939a4382aba not found: ID does not exist" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.794142 4825 scope.go:117] "RemoveContainer" 
containerID="06d1c0de023ea77b2a5c10f56136ecdc5f2af3e5637480e5175d60c43245fc8d" Jan 22 15:47:58 crc kubenswrapper[4825]: E0122 15:47:58.795874 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06d1c0de023ea77b2a5c10f56136ecdc5f2af3e5637480e5175d60c43245fc8d\": container with ID starting with 06d1c0de023ea77b2a5c10f56136ecdc5f2af3e5637480e5175d60c43245fc8d not found: ID does not exist" containerID="06d1c0de023ea77b2a5c10f56136ecdc5f2af3e5637480e5175d60c43245fc8d" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.795924 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06d1c0de023ea77b2a5c10f56136ecdc5f2af3e5637480e5175d60c43245fc8d"} err="failed to get container status \"06d1c0de023ea77b2a5c10f56136ecdc5f2af3e5637480e5175d60c43245fc8d\": rpc error: code = NotFound desc = could not find container \"06d1c0de023ea77b2a5c10f56136ecdc5f2af3e5637480e5175d60c43245fc8d\": container with ID starting with 06d1c0de023ea77b2a5c10f56136ecdc5f2af3e5637480e5175d60c43245fc8d not found: ID does not exist" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.795955 4825 scope.go:117] "RemoveContainer" containerID="26b6493a41a2935c0251c9efc0c32c3eaa5e58793987c28aead832799f0c03db" Jan 22 15:47:58 crc kubenswrapper[4825]: E0122 15:47:58.802482 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26b6493a41a2935c0251c9efc0c32c3eaa5e58793987c28aead832799f0c03db\": container with ID starting with 26b6493a41a2935c0251c9efc0c32c3eaa5e58793987c28aead832799f0c03db not found: ID does not exist" containerID="26b6493a41a2935c0251c9efc0c32c3eaa5e58793987c28aead832799f0c03db" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.802532 4825 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"26b6493a41a2935c0251c9efc0c32c3eaa5e58793987c28aead832799f0c03db"} err="failed to get container status \"26b6493a41a2935c0251c9efc0c32c3eaa5e58793987c28aead832799f0c03db\": rpc error: code = NotFound desc = could not find container \"26b6493a41a2935c0251c9efc0c32c3eaa5e58793987c28aead832799f0c03db\": container with ID starting with 26b6493a41a2935c0251c9efc0c32c3eaa5e58793987c28aead832799f0c03db not found: ID does not exist" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.803488 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.934714 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc-config-data\") pod \"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc\" (UID: \"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc\") " Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.934927 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc-combined-ca-bundle\") pod \"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc\" (UID: \"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc\") " Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.935065 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rb52\" (UniqueName: \"kubernetes.io/projected/a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc-kube-api-access-6rb52\") pod \"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc\" (UID: \"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc\") " Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.941333 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc-kube-api-access-6rb52" (OuterVolumeSpecName: "kube-api-access-6rb52") pod 
"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc" (UID: "a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc"). InnerVolumeSpecName "kube-api-access-6rb52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.969260 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc-config-data" (OuterVolumeSpecName: "config-data") pod "a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc" (UID: "a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:47:58 crc kubenswrapper[4825]: I0122 15:47:58.971178 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc" (UID: "a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:47:59 crc kubenswrapper[4825]: I0122 15:47:59.038179 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:59 crc kubenswrapper[4825]: I0122 15:47:59.038235 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rb52\" (UniqueName: \"kubernetes.io/projected/a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc-kube-api-access-6rb52\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:59 crc kubenswrapper[4825]: I0122 15:47:59.038255 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:47:59 crc kubenswrapper[4825]: I0122 15:47:59.528147 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qrqt8" Jan 22 15:47:59 crc kubenswrapper[4825]: I0122 15:47:59.548015 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:47:59 crc kubenswrapper[4825]: I0122 15:47:59.564368 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc","Type":"ContainerDied","Data":"4e9f77353bd4e07cde314c9d7a284367bfe1998af9bc606f32ea5b86fd9e1370"} Jan 22 15:47:59 crc kubenswrapper[4825]: I0122 15:47:59.564416 4825 scope.go:117] "RemoveContainer" containerID="cc901ab508fbce2c4f479010e2ca5d9207e071ccb4186a6875e127ac1c224744" Jan 22 15:48:00 crc kubenswrapper[4825]: I0122 15:48:00.037629 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1bc67821-b410-4ee3-b701-dec227fc8c56" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.234:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 15:48:00 crc kubenswrapper[4825]: I0122 15:48:00.082422 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1bc67821-b410-4ee3-b701-dec227fc8c56" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.234:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 15:48:00 crc kubenswrapper[4825]: I0122 15:48:00.649522 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 15:48:00 crc kubenswrapper[4825]: I0122 15:48:00.712731 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 15:48:00 crc kubenswrapper[4825]: I0122 15:48:00.735787 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qrqt8"] Jan 22 15:48:00 crc kubenswrapper[4825]: 
I0122 15:48:00.747723 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 15:48:00 crc kubenswrapper[4825]: E0122 15:48:00.748304 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440a9cba-a225-40d8-b171-c15c47d7d223" containerName="registry-server" Jan 22 15:48:00 crc kubenswrapper[4825]: I0122 15:48:00.748325 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="440a9cba-a225-40d8-b171-c15c47d7d223" containerName="registry-server" Jan 22 15:48:00 crc kubenswrapper[4825]: E0122 15:48:00.748374 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440a9cba-a225-40d8-b171-c15c47d7d223" containerName="extract-content" Jan 22 15:48:00 crc kubenswrapper[4825]: I0122 15:48:00.748383 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="440a9cba-a225-40d8-b171-c15c47d7d223" containerName="extract-content" Jan 22 15:48:00 crc kubenswrapper[4825]: E0122 15:48:00.748399 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440a9cba-a225-40d8-b171-c15c47d7d223" containerName="extract-utilities" Jan 22 15:48:00 crc kubenswrapper[4825]: I0122 15:48:00.748406 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="440a9cba-a225-40d8-b171-c15c47d7d223" containerName="extract-utilities" Jan 22 15:48:00 crc kubenswrapper[4825]: E0122 15:48:00.748419 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 15:48:00 crc kubenswrapper[4825]: I0122 15:48:00.748425 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 15:48:00 crc kubenswrapper[4825]: I0122 15:48:00.748644 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 15:48:00 crc 
kubenswrapper[4825]: I0122 15:48:00.748679 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="440a9cba-a225-40d8-b171-c15c47d7d223" containerName="registry-server" Jan 22 15:48:00 crc kubenswrapper[4825]: I0122 15:48:00.749452 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:00 crc kubenswrapper[4825]: I0122 15:48:00.754611 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 22 15:48:00 crc kubenswrapper[4825]: I0122 15:48:00.754871 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 22 15:48:00 crc kubenswrapper[4825]: I0122 15:48:00.755023 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 22 15:48:00 crc kubenswrapper[4825]: I0122 15:48:00.782467 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 15:48:01 crc kubenswrapper[4825]: I0122 15:48:01.068418 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qrqt8"] Jan 22 15:48:01 crc kubenswrapper[4825]: I0122 15:48:01.163050 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd2d3808-2332-4ae9-b3c0-2e58ba48437c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd2d3808-2332-4ae9-b3c0-2e58ba48437c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:01 crc kubenswrapper[4825]: I0122 15:48:01.163112 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd2d3808-2332-4ae9-b3c0-2e58ba48437c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd2d3808-2332-4ae9-b3c0-2e58ba48437c\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:01 crc kubenswrapper[4825]: I0122 15:48:01.163146 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd2d3808-2332-4ae9-b3c0-2e58ba48437c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd2d3808-2332-4ae9-b3c0-2e58ba48437c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:01 crc kubenswrapper[4825]: I0122 15:48:01.163202 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd2d3808-2332-4ae9-b3c0-2e58ba48437c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd2d3808-2332-4ae9-b3c0-2e58ba48437c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:01 crc kubenswrapper[4825]: I0122 15:48:01.163285 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6qrq\" (UniqueName: \"kubernetes.io/projected/bd2d3808-2332-4ae9-b3c0-2e58ba48437c-kube-api-access-p6qrq\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd2d3808-2332-4ae9-b3c0-2e58ba48437c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:01 crc kubenswrapper[4825]: I0122 15:48:01.265364 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd2d3808-2332-4ae9-b3c0-2e58ba48437c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd2d3808-2332-4ae9-b3c0-2e58ba48437c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:01 crc kubenswrapper[4825]: I0122 15:48:01.265421 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd2d3808-2332-4ae9-b3c0-2e58ba48437c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd2d3808-2332-4ae9-b3c0-2e58ba48437c\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:01 crc kubenswrapper[4825]: I0122 15:48:01.265477 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd2d3808-2332-4ae9-b3c0-2e58ba48437c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd2d3808-2332-4ae9-b3c0-2e58ba48437c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:01 crc kubenswrapper[4825]: I0122 15:48:01.265560 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6qrq\" (UniqueName: \"kubernetes.io/projected/bd2d3808-2332-4ae9-b3c0-2e58ba48437c-kube-api-access-p6qrq\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd2d3808-2332-4ae9-b3c0-2e58ba48437c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:01 crc kubenswrapper[4825]: I0122 15:48:01.265715 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd2d3808-2332-4ae9-b3c0-2e58ba48437c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd2d3808-2332-4ae9-b3c0-2e58ba48437c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:01 crc kubenswrapper[4825]: I0122 15:48:01.272934 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd2d3808-2332-4ae9-b3c0-2e58ba48437c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd2d3808-2332-4ae9-b3c0-2e58ba48437c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:01 crc kubenswrapper[4825]: I0122 15:48:01.276627 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd2d3808-2332-4ae9-b3c0-2e58ba48437c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd2d3808-2332-4ae9-b3c0-2e58ba48437c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:01 crc 
kubenswrapper[4825]: I0122 15:48:01.277058 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd2d3808-2332-4ae9-b3c0-2e58ba48437c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd2d3808-2332-4ae9-b3c0-2e58ba48437c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:01 crc kubenswrapper[4825]: I0122 15:48:01.288906 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd2d3808-2332-4ae9-b3c0-2e58ba48437c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd2d3808-2332-4ae9-b3c0-2e58ba48437c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:01 crc kubenswrapper[4825]: I0122 15:48:01.297615 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6qrq\" (UniqueName: \"kubernetes.io/projected/bd2d3808-2332-4ae9-b3c0-2e58ba48437c-kube-api-access-p6qrq\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd2d3808-2332-4ae9-b3c0-2e58ba48437c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:01 crc kubenswrapper[4825]: I0122 15:48:01.387728 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:01 crc kubenswrapper[4825]: I0122 15:48:01.916853 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="440a9cba-a225-40d8-b171-c15c47d7d223" path="/var/lib/kubelet/pods/440a9cba-a225-40d8-b171-c15c47d7d223/volumes" Jan 22 15:48:01 crc kubenswrapper[4825]: I0122 15:48:01.917728 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc" path="/var/lib/kubelet/pods/a5f377bb-6d9c-4a59-acd6-e5d1fc6306fc/volumes" Jan 22 15:48:03 crc kubenswrapper[4825]: I0122 15:48:03.042155 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 15:48:03 crc kubenswrapper[4825]: I0122 15:48:03.067862 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bd2d3808-2332-4ae9-b3c0-2e58ba48437c","Type":"ContainerStarted","Data":"d125f4e8db24389bc44588a5cd469c44ae3dd86248be0b4af5cbd5856d9fdb0a"} Jan 22 15:48:03 crc kubenswrapper[4825]: I0122 15:48:03.486119 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 15:48:03 crc kubenswrapper[4825]: I0122 15:48:03.491821 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 15:48:03 crc kubenswrapper[4825]: I0122 15:48:03.809889 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 15:48:04 crc kubenswrapper[4825]: I0122 15:48:04.087630 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bd2d3808-2332-4ae9-b3c0-2e58ba48437c","Type":"ContainerStarted","Data":"4ca216a639be687b6c4a4e59c6a19d16487d17bbd4db08225c70dd72f237b41b"} Jan 22 15:48:04 crc kubenswrapper[4825]: I0122 15:48:04.097018 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Jan 22 15:48:04 crc kubenswrapper[4825]: I0122 15:48:04.119134 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.119110673 podStartE2EDuration="4.119110673s" podCreationTimestamp="2026-01-22 15:48:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:48:04.114443912 +0000 UTC m=+1430.875970832" watchObservedRunningTime="2026-01-22 15:48:04.119110673 +0000 UTC m=+1430.880637583" Jan 22 15:48:05 crc kubenswrapper[4825]: I0122 15:48:05.541313 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:48:05 crc kubenswrapper[4825]: I0122 15:48:05.541646 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:48:05 crc kubenswrapper[4825]: I0122 15:48:05.541705 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:48:05 crc kubenswrapper[4825]: I0122 15:48:05.542706 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f117d8aef866860d54f3d492ab55e9d654f82ddf841344db75dba9d26403f13"} pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 
15:48:05 crc kubenswrapper[4825]: I0122 15:48:05.542784 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" containerID="cri-o://4f117d8aef866860d54f3d492ab55e9d654f82ddf841344db75dba9d26403f13" gracePeriod=600 Jan 22 15:48:06 crc kubenswrapper[4825]: I0122 15:48:06.146672 4825 generic.go:334] "Generic (PLEG): container finished" podID="1d6015ae-d193-4854-9861-dc4384510fdb" containerID="4f117d8aef866860d54f3d492ab55e9d654f82ddf841344db75dba9d26403f13" exitCode=0 Jan 22 15:48:06 crc kubenswrapper[4825]: I0122 15:48:06.146732 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerDied","Data":"4f117d8aef866860d54f3d492ab55e9d654f82ddf841344db75dba9d26403f13"} Jan 22 15:48:06 crc kubenswrapper[4825]: I0122 15:48:06.147133 4825 scope.go:117] "RemoveContainer" containerID="2e2cd9ccac91574642f11cb7b9691d30ced63e64cba6f480b19075fcb4ac2cb1" Jan 22 15:48:06 crc kubenswrapper[4825]: I0122 15:48:06.392892 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:06 crc kubenswrapper[4825]: I0122 15:48:06.615659 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 15:48:07 crc kubenswrapper[4825]: I0122 15:48:07.161734 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerStarted","Data":"88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d"} Jan 22 15:48:08 crc kubenswrapper[4825]: I0122 15:48:08.744485 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 
22 15:48:08 crc kubenswrapper[4825]: I0122 15:48:08.745286 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 15:48:08 crc kubenswrapper[4825]: I0122 15:48:08.750325 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 15:48:08 crc kubenswrapper[4825]: I0122 15:48:08.756909 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.182419 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.186495 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.529992 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54dd998c-dllkj"] Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.532229 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.565049 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-dllkj"] Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.696547 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj7wq\" (UniqueName: \"kubernetes.io/projected/60992c6a-3516-45a6-ab46-7705b343bf46-kube-api-access-hj7wq\") pod \"dnsmasq-dns-54dd998c-dllkj\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.696717 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-config\") pod \"dnsmasq-dns-54dd998c-dllkj\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.696774 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-dllkj\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.696795 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-ovsdbserver-nb\") pod \"dnsmasq-dns-54dd998c-dllkj\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.696836 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-dllkj\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.696865 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-dns-svc\") pod \"dnsmasq-dns-54dd998c-dllkj\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.798635 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-dllkj\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.798690 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-dns-svc\") pod \"dnsmasq-dns-54dd998c-dllkj\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.798784 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj7wq\" (UniqueName: \"kubernetes.io/projected/60992c6a-3516-45a6-ab46-7705b343bf46-kube-api-access-hj7wq\") pod \"dnsmasq-dns-54dd998c-dllkj\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.798879 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-config\") pod \"dnsmasq-dns-54dd998c-dllkj\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.798924 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-dllkj\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.798942 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-ovsdbserver-nb\") pod \"dnsmasq-dns-54dd998c-dllkj\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.799777 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-ovsdbserver-nb\") pod \"dnsmasq-dns-54dd998c-dllkj\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.799782 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-dllkj\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.800410 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-dns-svc\") pod 
\"dnsmasq-dns-54dd998c-dllkj\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.800591 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-config\") pod \"dnsmasq-dns-54dd998c-dllkj\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.800809 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-dllkj\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.823549 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj7wq\" (UniqueName: \"kubernetes.io/projected/60992c6a-3516-45a6-ab46-7705b343bf46-kube-api-access-hj7wq\") pod \"dnsmasq-dns-54dd998c-dllkj\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:09 crc kubenswrapper[4825]: I0122 15:48:09.857238 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:10 crc kubenswrapper[4825]: I0122 15:48:10.501808 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-dllkj"] Jan 22 15:48:11 crc kubenswrapper[4825]: I0122 15:48:11.376174 4825 generic.go:334] "Generic (PLEG): container finished" podID="60992c6a-3516-45a6-ab46-7705b343bf46" containerID="fb33445c888cfd4d541506e30cceeb38a287f28f0b9c865ca1495f18a6829ab6" exitCode=0 Jan 22 15:48:11 crc kubenswrapper[4825]: I0122 15:48:11.376404 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-dllkj" event={"ID":"60992c6a-3516-45a6-ab46-7705b343bf46","Type":"ContainerDied","Data":"fb33445c888cfd4d541506e30cceeb38a287f28f0b9c865ca1495f18a6829ab6"} Jan 22 15:48:11 crc kubenswrapper[4825]: I0122 15:48:11.377655 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-dllkj" event={"ID":"60992c6a-3516-45a6-ab46-7705b343bf46","Type":"ContainerStarted","Data":"9c18c6bc1c92614a70a6d1a3fccb7cc1b6f3fdf277f0d84ff92b2a0e3a957a92"} Jan 22 15:48:11 crc kubenswrapper[4825]: I0122 15:48:11.389993 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:11 crc kubenswrapper[4825]: I0122 15:48:11.467401 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:11 crc kubenswrapper[4825]: I0122 15:48:11.614552 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:48:11 crc kubenswrapper[4825]: I0122 15:48:11.614890 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerName="ceilometer-central-agent" containerID="cri-o://3fa16d745a52b0d8eb1a53955ddaa029a42cd29fbf9a43a3b74b0af248944a2f" gracePeriod=30 Jan 22 15:48:11 crc 
kubenswrapper[4825]: I0122 15:48:11.615053 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerName="ceilometer-notification-agent" containerID="cri-o://b98948de4b2e236681339901453f88a3ca638a7983f78903659121bf53181953" gracePeriod=30 Jan 22 15:48:11 crc kubenswrapper[4825]: I0122 15:48:11.615052 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerName="sg-core" containerID="cri-o://48130e468c1a7dfa8c09d2ac329da1b70f1154de3ed153b322dd826ae28cdbaa" gracePeriod=30 Jan 22 15:48:11 crc kubenswrapper[4825]: I0122 15:48:11.615287 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerName="proxy-httpd" containerID="cri-o://bf43f0dca51ca20d83504cc9529552d828900b4ea91b8a943862c37653c70a99" gracePeriod=30 Jan 22 15:48:12 crc kubenswrapper[4825]: I0122 15:48:12.716372 4825 generic.go:334] "Generic (PLEG): container finished" podID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerID="bf43f0dca51ca20d83504cc9529552d828900b4ea91b8a943862c37653c70a99" exitCode=0 Jan 22 15:48:12 crc kubenswrapper[4825]: I0122 15:48:12.717878 4825 generic.go:334] "Generic (PLEG): container finished" podID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerID="48130e468c1a7dfa8c09d2ac329da1b70f1154de3ed153b322dd826ae28cdbaa" exitCode=2 Jan 22 15:48:12 crc kubenswrapper[4825]: I0122 15:48:12.717959 4825 generic.go:334] "Generic (PLEG): container finished" podID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerID="3fa16d745a52b0d8eb1a53955ddaa029a42cd29fbf9a43a3b74b0af248944a2f" exitCode=0 Jan 22 15:48:12 crc kubenswrapper[4825]: I0122 15:48:12.716752 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6a3f8382-fadc-4144-83c0-b378ebcbdef1","Type":"ContainerDied","Data":"bf43f0dca51ca20d83504cc9529552d828900b4ea91b8a943862c37653c70a99"} Jan 22 15:48:12 crc kubenswrapper[4825]: I0122 15:48:12.718198 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a3f8382-fadc-4144-83c0-b378ebcbdef1","Type":"ContainerDied","Data":"48130e468c1a7dfa8c09d2ac329da1b70f1154de3ed153b322dd826ae28cdbaa"} Jan 22 15:48:12 crc kubenswrapper[4825]: I0122 15:48:12.718287 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a3f8382-fadc-4144-83c0-b378ebcbdef1","Type":"ContainerDied","Data":"3fa16d745a52b0d8eb1a53955ddaa029a42cd29fbf9a43a3b74b0af248944a2f"} Jan 22 15:48:12 crc kubenswrapper[4825]: I0122 15:48:12.737800 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-dllkj" event={"ID":"60992c6a-3516-45a6-ab46-7705b343bf46","Type":"ContainerStarted","Data":"ba809397fea502c09c2186b8a90756d6ed5509e6e8b2c26c43da9495af3628c0"} Jan 22 15:48:12 crc kubenswrapper[4825]: I0122 15:48:12.737880 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:48:12 crc kubenswrapper[4825]: I0122 15:48:12.762279 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 22 15:48:12 crc kubenswrapper[4825]: I0122 15:48:12.767553 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54dd998c-dllkj" podStartSLOduration=3.767532864 podStartE2EDuration="3.767532864s" podCreationTimestamp="2026-01-22 15:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:48:12.762628306 +0000 UTC m=+1439.524155206" watchObservedRunningTime="2026-01-22 15:48:12.767532864 +0000 UTC m=+1439.529059764" Jan 22 15:48:13 crc 
kubenswrapper[4825]: I0122 15:48:13.214508 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-pt4fj"] Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.216323 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pt4fj" Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.220427 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.220592 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.229659 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/020cd9b5-8960-4a30-8322-c1de670f2f10-config-data\") pod \"nova-cell1-cell-mapping-pt4fj\" (UID: \"020cd9b5-8960-4a30-8322-c1de670f2f10\") " pod="openstack/nova-cell1-cell-mapping-pt4fj" Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.229794 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/020cd9b5-8960-4a30-8322-c1de670f2f10-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pt4fj\" (UID: \"020cd9b5-8960-4a30-8322-c1de670f2f10\") " pod="openstack/nova-cell1-cell-mapping-pt4fj" Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.229818 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhj76\" (UniqueName: \"kubernetes.io/projected/020cd9b5-8960-4a30-8322-c1de670f2f10-kube-api-access-lhj76\") pod \"nova-cell1-cell-mapping-pt4fj\" (UID: \"020cd9b5-8960-4a30-8322-c1de670f2f10\") " pod="openstack/nova-cell1-cell-mapping-pt4fj" Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.229838 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/020cd9b5-8960-4a30-8322-c1de670f2f10-scripts\") pod \"nova-cell1-cell-mapping-pt4fj\" (UID: \"020cd9b5-8960-4a30-8322-c1de670f2f10\") " pod="openstack/nova-cell1-cell-mapping-pt4fj" Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.239947 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pt4fj"] Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.331918 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/020cd9b5-8960-4a30-8322-c1de670f2f10-config-data\") pod \"nova-cell1-cell-mapping-pt4fj\" (UID: \"020cd9b5-8960-4a30-8322-c1de670f2f10\") " pod="openstack/nova-cell1-cell-mapping-pt4fj" Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.332158 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/020cd9b5-8960-4a30-8322-c1de670f2f10-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pt4fj\" (UID: \"020cd9b5-8960-4a30-8322-c1de670f2f10\") " pod="openstack/nova-cell1-cell-mapping-pt4fj" Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.332215 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhj76\" (UniqueName: \"kubernetes.io/projected/020cd9b5-8960-4a30-8322-c1de670f2f10-kube-api-access-lhj76\") pod \"nova-cell1-cell-mapping-pt4fj\" (UID: \"020cd9b5-8960-4a30-8322-c1de670f2f10\") " pod="openstack/nova-cell1-cell-mapping-pt4fj" Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.332249 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/020cd9b5-8960-4a30-8322-c1de670f2f10-scripts\") pod \"nova-cell1-cell-mapping-pt4fj\" (UID: \"020cd9b5-8960-4a30-8322-c1de670f2f10\") 
" pod="openstack/nova-cell1-cell-mapping-pt4fj" Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.340050 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/020cd9b5-8960-4a30-8322-c1de670f2f10-config-data\") pod \"nova-cell1-cell-mapping-pt4fj\" (UID: \"020cd9b5-8960-4a30-8322-c1de670f2f10\") " pod="openstack/nova-cell1-cell-mapping-pt4fj" Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.351504 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/020cd9b5-8960-4a30-8322-c1de670f2f10-scripts\") pod \"nova-cell1-cell-mapping-pt4fj\" (UID: \"020cd9b5-8960-4a30-8322-c1de670f2f10\") " pod="openstack/nova-cell1-cell-mapping-pt4fj" Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.353695 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhj76\" (UniqueName: \"kubernetes.io/projected/020cd9b5-8960-4a30-8322-c1de670f2f10-kube-api-access-lhj76\") pod \"nova-cell1-cell-mapping-pt4fj\" (UID: \"020cd9b5-8960-4a30-8322-c1de670f2f10\") " pod="openstack/nova-cell1-cell-mapping-pt4fj" Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.363575 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/020cd9b5-8960-4a30-8322-c1de670f2f10-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pt4fj\" (UID: \"020cd9b5-8960-4a30-8322-c1de670f2f10\") " pod="openstack/nova-cell1-cell-mapping-pt4fj" Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.552431 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pt4fj" Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.857156 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.861378 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1bc67821-b410-4ee3-b701-dec227fc8c56" containerName="nova-api-log" containerID="cri-o://26c368ff9baa77c02dfe169bf835f9e4199f9735e16dd07d11a321f2142888ce" gracePeriod=30 Jan 22 15:48:13 crc kubenswrapper[4825]: I0122 15:48:13.862500 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1bc67821-b410-4ee3-b701-dec227fc8c56" containerName="nova-api-api" containerID="cri-o://4d84713498475e4c8851078ff630b76c551d177d9c3403b329c2a80e8061cf79" gracePeriod=30 Jan 22 15:48:14 crc kubenswrapper[4825]: I0122 15:48:14.355884 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pt4fj"] Jan 22 15:48:14 crc kubenswrapper[4825]: I0122 15:48:14.906620 4825 generic.go:334] "Generic (PLEG): container finished" podID="1bc67821-b410-4ee3-b701-dec227fc8c56" containerID="26c368ff9baa77c02dfe169bf835f9e4199f9735e16dd07d11a321f2142888ce" exitCode=143 Jan 22 15:48:14 crc kubenswrapper[4825]: I0122 15:48:14.906707 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bc67821-b410-4ee3-b701-dec227fc8c56","Type":"ContainerDied","Data":"26c368ff9baa77c02dfe169bf835f9e4199f9735e16dd07d11a321f2142888ce"} Jan 22 15:48:14 crc kubenswrapper[4825]: I0122 15:48:14.912763 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pt4fj" event={"ID":"020cd9b5-8960-4a30-8322-c1de670f2f10","Type":"ContainerStarted","Data":"4f70e69e07fe0c12bcb0cbc744d21b186f94643f57ca9ca3a58f70735bcdc862"} Jan 22 15:48:14 crc kubenswrapper[4825]: I0122 15:48:14.912846 
4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pt4fj" event={"ID":"020cd9b5-8960-4a30-8322-c1de670f2f10","Type":"ContainerStarted","Data":"96422bed906fe1eb77f7bd7652fed2e623385a730daa54e76b446fb2d760c0af"} Jan 22 15:48:14 crc kubenswrapper[4825]: I0122 15:48:14.935358 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-pt4fj" podStartSLOduration=1.935336567 podStartE2EDuration="1.935336567s" podCreationTimestamp="2026-01-22 15:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:48:14.927775164 +0000 UTC m=+1441.689302074" watchObservedRunningTime="2026-01-22 15:48:14.935336567 +0000 UTC m=+1441.696863477" Jan 22 15:48:15 crc kubenswrapper[4825]: I0122 15:48:15.932025 4825 generic.go:334] "Generic (PLEG): container finished" podID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerID="b98948de4b2e236681339901453f88a3ca638a7983f78903659121bf53181953" exitCode=0 Jan 22 15:48:15 crc kubenswrapper[4825]: I0122 15:48:15.932108 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a3f8382-fadc-4144-83c0-b378ebcbdef1","Type":"ContainerDied","Data":"b98948de4b2e236681339901453f88a3ca638a7983f78903659121bf53181953"} Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.317744 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.406681 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-combined-ca-bundle\") pod \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.406804 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a3f8382-fadc-4144-83c0-b378ebcbdef1-log-httpd\") pod \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.406876 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-sg-core-conf-yaml\") pod \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.406949 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a3f8382-fadc-4144-83c0-b378ebcbdef1-run-httpd\") pod \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.407037 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-scripts\") pod \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.407168 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-ceilometer-tls-certs\") pod \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.407227 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp4qd\" (UniqueName: \"kubernetes.io/projected/6a3f8382-fadc-4144-83c0-b378ebcbdef1-kube-api-access-sp4qd\") pod \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.407280 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-config-data\") pod \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\" (UID: \"6a3f8382-fadc-4144-83c0-b378ebcbdef1\") " Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.407860 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a3f8382-fadc-4144-83c0-b378ebcbdef1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6a3f8382-fadc-4144-83c0-b378ebcbdef1" (UID: "6a3f8382-fadc-4144-83c0-b378ebcbdef1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.411465 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a3f8382-fadc-4144-83c0-b378ebcbdef1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6a3f8382-fadc-4144-83c0-b378ebcbdef1" (UID: "6a3f8382-fadc-4144-83c0-b378ebcbdef1"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.428605 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-scripts" (OuterVolumeSpecName: "scripts") pod "6a3f8382-fadc-4144-83c0-b378ebcbdef1" (UID: "6a3f8382-fadc-4144-83c0-b378ebcbdef1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.432480 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a3f8382-fadc-4144-83c0-b378ebcbdef1-kube-api-access-sp4qd" (OuterVolumeSpecName: "kube-api-access-sp4qd") pod "6a3f8382-fadc-4144-83c0-b378ebcbdef1" (UID: "6a3f8382-fadc-4144-83c0-b378ebcbdef1"). InnerVolumeSpecName "kube-api-access-sp4qd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.470640 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6a3f8382-fadc-4144-83c0-b378ebcbdef1" (UID: "6a3f8382-fadc-4144-83c0-b378ebcbdef1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.637122 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6a3f8382-fadc-4144-83c0-b378ebcbdef1" (UID: "6a3f8382-fadc-4144-83c0-b378ebcbdef1"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.651132 4825 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.651176 4825 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a3f8382-fadc-4144-83c0-b378ebcbdef1-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.651186 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.651197 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp4qd\" (UniqueName: \"kubernetes.io/projected/6a3f8382-fadc-4144-83c0-b378ebcbdef1-kube-api-access-sp4qd\") on node \"crc\" DevicePath \"\""
Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.651210 4825 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a3f8382-fadc-4144-83c0-b378ebcbdef1-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.711785 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a3f8382-fadc-4144-83c0-b378ebcbdef1" (UID: "6a3f8382-fadc-4144-83c0-b378ebcbdef1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.755964 4825 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 22 15:48:16 crc kubenswrapper[4825]: I0122 15:48:16.756357 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.070621 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a3f8382-fadc-4144-83c0-b378ebcbdef1","Type":"ContainerDied","Data":"2d41b50a394fc28c379b023243761898021710803b380c4827ecb9439358eca6"}
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.071833 4825 scope.go:117] "RemoveContainer" containerID="bf43f0dca51ca20d83504cc9529552d828900b4ea91b8a943862c37653c70a99"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.070685 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.103036 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-config-data" (OuterVolumeSpecName: "config-data") pod "6a3f8382-fadc-4144-83c0-b378ebcbdef1" (UID: "6a3f8382-fadc-4144-83c0-b378ebcbdef1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.110300 4825 scope.go:117] "RemoveContainer" containerID="48130e468c1a7dfa8c09d2ac329da1b70f1154de3ed153b322dd826ae28cdbaa"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.175513 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a3f8382-fadc-4144-83c0-b378ebcbdef1-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.232858 4825 scope.go:117] "RemoveContainer" containerID="b98948de4b2e236681339901453f88a3ca638a7983f78903659121bf53181953"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.257192 4825 scope.go:117] "RemoveContainer" containerID="3fa16d745a52b0d8eb1a53955ddaa029a42cd29fbf9a43a3b74b0af248944a2f"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.419743 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.444023 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.459126 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 22 15:48:17 crc kubenswrapper[4825]: E0122 15:48:17.459609 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerName="ceilometer-central-agent"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.459629 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerName="ceilometer-central-agent"
Jan 22 15:48:17 crc kubenswrapper[4825]: E0122 15:48:17.459648 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerName="ceilometer-notification-agent"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.459656 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerName="ceilometer-notification-agent"
Jan 22 15:48:17 crc kubenswrapper[4825]: E0122 15:48:17.459670 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerName="sg-core"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.459676 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerName="sg-core"
Jan 22 15:48:17 crc kubenswrapper[4825]: E0122 15:48:17.459692 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerName="proxy-httpd"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.459698 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerName="proxy-httpd"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.459922 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerName="ceilometer-notification-agent"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.459951 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerName="sg-core"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.459964 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerName="proxy-httpd"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.459971 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" containerName="ceilometer-central-agent"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.461922 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.465433 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.468874 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.471961 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.478442 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.507917 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.507952 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-config-data\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.508031 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79d10c93-0240-4a92-9205-7ecc258e0c49-log-httpd\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.508097 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-scripts\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.508306 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9bcs\" (UniqueName: \"kubernetes.io/projected/79d10c93-0240-4a92-9205-7ecc258e0c49-kube-api-access-j9bcs\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.508369 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.508444 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79d10c93-0240-4a92-9205-7ecc258e0c49-run-httpd\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.508493 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.529520 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a3f8382-fadc-4144-83c0-b378ebcbdef1" path="/var/lib/kubelet/pods/6a3f8382-fadc-4144-83c0-b378ebcbdef1/volumes"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.612353 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.612506 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.612533 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-config-data\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.612570 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79d10c93-0240-4a92-9205-7ecc258e0c49-log-httpd\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.612642 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-scripts\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.612723 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9bcs\" (UniqueName: \"kubernetes.io/projected/79d10c93-0240-4a92-9205-7ecc258e0c49-kube-api-access-j9bcs\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.612749 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.612785 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79d10c93-0240-4a92-9205-7ecc258e0c49-run-httpd\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.614788 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79d10c93-0240-4a92-9205-7ecc258e0c49-log-httpd\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.615193 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79d10c93-0240-4a92-9205-7ecc258e0c49-run-httpd\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.618778 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.624745 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.625358 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.627401 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-config-data\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.627481 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-scripts\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.634630 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9bcs\" (UniqueName: \"kubernetes.io/projected/79d10c93-0240-4a92-9205-7ecc258e0c49-kube-api-access-j9bcs\") pod \"ceilometer-0\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " pod="openstack/ceilometer-0"
Jan 22 15:48:17 crc kubenswrapper[4825]: I0122 15:48:17.823623 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 15:48:18 crc kubenswrapper[4825]: I0122 15:48:18.094156 4825 generic.go:334] "Generic (PLEG): container finished" podID="1bc67821-b410-4ee3-b701-dec227fc8c56" containerID="4d84713498475e4c8851078ff630b76c551d177d9c3403b329c2a80e8061cf79" exitCode=0
Jan 22 15:48:18 crc kubenswrapper[4825]: I0122 15:48:18.094208 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bc67821-b410-4ee3-b701-dec227fc8c56","Type":"ContainerDied","Data":"4d84713498475e4c8851078ff630b76c551d177d9c3403b329c2a80e8061cf79"}
Jan 22 15:48:18 crc kubenswrapper[4825]: I0122 15:48:18.408460 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 22 15:48:18 crc kubenswrapper[4825]: I0122 15:48:18.421150 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 15:48:18 crc kubenswrapper[4825]: I0122 15:48:18.466184 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 15:48:18 crc kubenswrapper[4825]: I0122 15:48:18.533428 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bc67821-b410-4ee3-b701-dec227fc8c56-config-data\") pod \"1bc67821-b410-4ee3-b701-dec227fc8c56\" (UID: \"1bc67821-b410-4ee3-b701-dec227fc8c56\") "
Jan 22 15:48:18 crc kubenswrapper[4825]: I0122 15:48:18.534391 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlzd5\" (UniqueName: \"kubernetes.io/projected/1bc67821-b410-4ee3-b701-dec227fc8c56-kube-api-access-rlzd5\") pod \"1bc67821-b410-4ee3-b701-dec227fc8c56\" (UID: \"1bc67821-b410-4ee3-b701-dec227fc8c56\") "
Jan 22 15:48:18 crc kubenswrapper[4825]: I0122 15:48:18.534756 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc67821-b410-4ee3-b701-dec227fc8c56-combined-ca-bundle\") pod \"1bc67821-b410-4ee3-b701-dec227fc8c56\" (UID: \"1bc67821-b410-4ee3-b701-dec227fc8c56\") "
Jan 22 15:48:18 crc kubenswrapper[4825]: I0122 15:48:18.534864 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bc67821-b410-4ee3-b701-dec227fc8c56-logs\") pod \"1bc67821-b410-4ee3-b701-dec227fc8c56\" (UID: \"1bc67821-b410-4ee3-b701-dec227fc8c56\") "
Jan 22 15:48:18 crc kubenswrapper[4825]: I0122 15:48:18.535723 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bc67821-b410-4ee3-b701-dec227fc8c56-logs" (OuterVolumeSpecName: "logs") pod "1bc67821-b410-4ee3-b701-dec227fc8c56" (UID: "1bc67821-b410-4ee3-b701-dec227fc8c56"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:48:18 crc kubenswrapper[4825]: I0122 15:48:18.541191 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bc67821-b410-4ee3-b701-dec227fc8c56-kube-api-access-rlzd5" (OuterVolumeSpecName: "kube-api-access-rlzd5") pod "1bc67821-b410-4ee3-b701-dec227fc8c56" (UID: "1bc67821-b410-4ee3-b701-dec227fc8c56"). InnerVolumeSpecName "kube-api-access-rlzd5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:48:18 crc kubenswrapper[4825]: I0122 15:48:18.566556 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bc67821-b410-4ee3-b701-dec227fc8c56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bc67821-b410-4ee3-b701-dec227fc8c56" (UID: "1bc67821-b410-4ee3-b701-dec227fc8c56"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:48:18 crc kubenswrapper[4825]: I0122 15:48:18.571778 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bc67821-b410-4ee3-b701-dec227fc8c56-config-data" (OuterVolumeSpecName: "config-data") pod "1bc67821-b410-4ee3-b701-dec227fc8c56" (UID: "1bc67821-b410-4ee3-b701-dec227fc8c56"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:48:18 crc kubenswrapper[4825]: I0122 15:48:18.637811 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc67821-b410-4ee3-b701-dec227fc8c56-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 15:48:18 crc kubenswrapper[4825]: I0122 15:48:18.637857 4825 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bc67821-b410-4ee3-b701-dec227fc8c56-logs\") on node \"crc\" DevicePath \"\""
Jan 22 15:48:18 crc kubenswrapper[4825]: I0122 15:48:18.637871 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bc67821-b410-4ee3-b701-dec227fc8c56-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 15:48:18 crc kubenswrapper[4825]: I0122 15:48:18.637883 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlzd5\" (UniqueName: \"kubernetes.io/projected/1bc67821-b410-4ee3-b701-dec227fc8c56-kube-api-access-rlzd5\") on node \"crc\" DevicePath \"\""
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.180158 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.180177 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bc67821-b410-4ee3-b701-dec227fc8c56","Type":"ContainerDied","Data":"fae71e0f2f8b3cc82cca3d3e1e33d9d651e228e78c402c78539d762e900685af"}
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.180288 4825 scope.go:117] "RemoveContainer" containerID="4d84713498475e4c8851078ff630b76c551d177d9c3403b329c2a80e8061cf79"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.184625 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79d10c93-0240-4a92-9205-7ecc258e0c49","Type":"ContainerStarted","Data":"b29c11cd7ee8be63aad514b2543f3d9b72334b55a8320f224d95951c28a60101"}
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.206462 4825 scope.go:117] "RemoveContainer" containerID="26c368ff9baa77c02dfe169bf835f9e4199f9735e16dd07d11a321f2142888ce"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.226800 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.247952 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.260925 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 22 15:48:19 crc kubenswrapper[4825]: E0122 15:48:19.261440 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bc67821-b410-4ee3-b701-dec227fc8c56" containerName="nova-api-api"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.261460 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc67821-b410-4ee3-b701-dec227fc8c56" containerName="nova-api-api"
Jan 22 15:48:19 crc kubenswrapper[4825]: E0122 15:48:19.261476 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bc67821-b410-4ee3-b701-dec227fc8c56" containerName="nova-api-log"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.261482 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc67821-b410-4ee3-b701-dec227fc8c56" containerName="nova-api-log"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.263052 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bc67821-b410-4ee3-b701-dec227fc8c56" containerName="nova-api-api"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.263082 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bc67821-b410-4ee3-b701-dec227fc8c56" containerName="nova-api-log"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.264646 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.269387 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.269447 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.269740 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.277667 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.371462 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-public-tls-certs\") pod \"nova-api-0\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.371560 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqtsk\" (UniqueName: \"kubernetes.io/projected/08b56aa0-d47b-4742-92ea-ecc106776bd4-kube-api-access-nqtsk\") pod \"nova-api-0\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.371632 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08b56aa0-d47b-4742-92ea-ecc106776bd4-logs\") pod \"nova-api-0\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.371711 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-config-data\") pod \"nova-api-0\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.371728 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.371766 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.474134 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-config-data\") pod \"nova-api-0\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.474193 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.474293 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.474472 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-public-tls-certs\") pod \"nova-api-0\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.474563 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqtsk\" (UniqueName: \"kubernetes.io/projected/08b56aa0-d47b-4742-92ea-ecc106776bd4-kube-api-access-nqtsk\") pod \"nova-api-0\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.474683 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08b56aa0-d47b-4742-92ea-ecc106776bd4-logs\") pod \"nova-api-0\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.475372 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08b56aa0-d47b-4742-92ea-ecc106776bd4-logs\") pod \"nova-api-0\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.478597 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.479623 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-public-tls-certs\") pod \"nova-api-0\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.480320 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-config-data\") pod \"nova-api-0\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.482425 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.498781 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqtsk\" (UniqueName: \"kubernetes.io/projected/08b56aa0-d47b-4742-92ea-ecc106776bd4-kube-api-access-nqtsk\") pod \"nova-api-0\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.541713 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bc67821-b410-4ee3-b701-dec227fc8c56" path="/var/lib/kubelet/pods/1bc67821-b410-4ee3-b701-dec227fc8c56/volumes"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.594854 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.863140 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54dd998c-dllkj"
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.948546 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-2nzzl"]
Jan 22 15:48:19 crc kubenswrapper[4825]: I0122 15:48:19.948846 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl" podUID="56dacd23-6234-4d06-968b-ed6a51d03f70" containerName="dnsmasq-dns" containerID="cri-o://fa96f120abd6843a27e0f02e89394736626c3a412c9a65b8a8bf4970d4abc943" gracePeriod=10
Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.172596 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.254190 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"08b56aa0-d47b-4742-92ea-ecc106776bd4","Type":"ContainerStarted","Data":"5142685a5616ccf4283e5380b7f590e24e6c29656a17d8e11634ffbd593ac038"}
Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.269381 4825 generic.go:334] "Generic (PLEG): container finished" podID="56dacd23-6234-4d06-968b-ed6a51d03f70" containerID="fa96f120abd6843a27e0f02e89394736626c3a412c9a65b8a8bf4970d4abc943" exitCode=0
Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.269501 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl" event={"ID":"56dacd23-6234-4d06-968b-ed6a51d03f70","Type":"ContainerDied","Data":"fa96f120abd6843a27e0f02e89394736626c3a412c9a65b8a8bf4970d4abc943"}
Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.272787 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79d10c93-0240-4a92-9205-7ecc258e0c49","Type":"ContainerStarted","Data":"d6ea2d3450ac0ca2917c21fd9b22937d7cd4ec843a9f4fbaea6d835004c358a3"}
Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.536327 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl"
Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.609826 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-ovsdbserver-sb\") pod \"56dacd23-6234-4d06-968b-ed6a51d03f70\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") "
Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.610152 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-config\") pod \"56dacd23-6234-4d06-968b-ed6a51d03f70\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") "
Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.610258 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5l7v\" (UniqueName: \"kubernetes.io/projected/56dacd23-6234-4d06-968b-ed6a51d03f70-kube-api-access-p5l7v\") pod \"56dacd23-6234-4d06-968b-ed6a51d03f70\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") "
Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.610458 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-dns-swift-storage-0\") pod \"56dacd23-6234-4d06-968b-ed6a51d03f70\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") "
Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.610629 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-ovsdbserver-nb\") pod \"56dacd23-6234-4d06-968b-ed6a51d03f70\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") "
Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.610862 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-dns-svc\") pod \"56dacd23-6234-4d06-968b-ed6a51d03f70\" (UID: \"56dacd23-6234-4d06-968b-ed6a51d03f70\") "
Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.636503 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56dacd23-6234-4d06-968b-ed6a51d03f70-kube-api-access-p5l7v" (OuterVolumeSpecName: "kube-api-access-p5l7v") pod "56dacd23-6234-4d06-968b-ed6a51d03f70" (UID: "56dacd23-6234-4d06-968b-ed6a51d03f70"). InnerVolumeSpecName "kube-api-access-p5l7v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.714073 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5l7v\" (UniqueName: \"kubernetes.io/projected/56dacd23-6234-4d06-968b-ed6a51d03f70-kube-api-access-p5l7v\") on node \"crc\" DevicePath \"\""
Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.730331 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56dacd23-6234-4d06-968b-ed6a51d03f70" (UID: "56dacd23-6234-4d06-968b-ed6a51d03f70"). InnerVolumeSpecName "ovsdbserver-sb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.757651 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-config" (OuterVolumeSpecName: "config") pod "56dacd23-6234-4d06-968b-ed6a51d03f70" (UID: "56dacd23-6234-4d06-968b-ed6a51d03f70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.762539 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56dacd23-6234-4d06-968b-ed6a51d03f70" (UID: "56dacd23-6234-4d06-968b-ed6a51d03f70"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.768003 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56dacd23-6234-4d06-968b-ed6a51d03f70" (UID: "56dacd23-6234-4d06-968b-ed6a51d03f70"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.799759 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "56dacd23-6234-4d06-968b-ed6a51d03f70" (UID: "56dacd23-6234-4d06-968b-ed6a51d03f70"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.822967 4825 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.823028 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.823045 4825 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.823056 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:20 crc kubenswrapper[4825]: I0122 15:48:20.823069 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56dacd23-6234-4d06-968b-ed6a51d03f70-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:21 crc kubenswrapper[4825]: I0122 15:48:21.284688 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"08b56aa0-d47b-4742-92ea-ecc106776bd4","Type":"ContainerStarted","Data":"cb3d640515e4d069a3aee78de8fedc7cc0fece72c5a788bdabf59dfd2a32b789"} Jan 22 15:48:21 crc kubenswrapper[4825]: I0122 15:48:21.284999 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"08b56aa0-d47b-4742-92ea-ecc106776bd4","Type":"ContainerStarted","Data":"1b01bcb1986270592b766c84c6e3eabc89d23db2ef19539734c15a0ca0f3eecb"} Jan 22 15:48:21 crc 
kubenswrapper[4825]: I0122 15:48:21.286808 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl" event={"ID":"56dacd23-6234-4d06-968b-ed6a51d03f70","Type":"ContainerDied","Data":"b8fedb6d422eb0691c2d5a57bd485b54a5b13090839b1c44731386e6d7b4adbb"} Jan 22 15:48:21 crc kubenswrapper[4825]: I0122 15:48:21.286834 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-2nzzl" Jan 22 15:48:21 crc kubenswrapper[4825]: I0122 15:48:21.286883 4825 scope.go:117] "RemoveContainer" containerID="fa96f120abd6843a27e0f02e89394736626c3a412c9a65b8a8bf4970d4abc943" Jan 22 15:48:21 crc kubenswrapper[4825]: I0122 15:48:21.290061 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79d10c93-0240-4a92-9205-7ecc258e0c49","Type":"ContainerStarted","Data":"978334a91a15bd1fc890686aa3afb0eac368a82b60877a78a17a90dd4287dc46"} Jan 22 15:48:21 crc kubenswrapper[4825]: I0122 15:48:21.290098 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79d10c93-0240-4a92-9205-7ecc258e0c49","Type":"ContainerStarted","Data":"32014d6ef2306c1dbc5b119d3d4a34b690ccb37c24c9d27cee69d8f1f3be2ff5"} Jan 22 15:48:21 crc kubenswrapper[4825]: I0122 15:48:21.318005 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.317964725 podStartE2EDuration="2.317964725s" podCreationTimestamp="2026-01-22 15:48:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:48:21.309803895 +0000 UTC m=+1448.071330825" watchObservedRunningTime="2026-01-22 15:48:21.317964725 +0000 UTC m=+1448.079491635" Jan 22 15:48:21 crc kubenswrapper[4825]: I0122 15:48:21.328631 4825 scope.go:117] "RemoveContainer" containerID="1c202468e52229edf2796b077e04e5ab359d5e4213179ef0e62aa5afe28d40b9" Jan 22 
15:48:21 crc kubenswrapper[4825]: I0122 15:48:21.337853 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-2nzzl"] Jan 22 15:48:21 crc kubenswrapper[4825]: I0122 15:48:21.349516 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-2nzzl"] Jan 22 15:48:21 crc kubenswrapper[4825]: I0122 15:48:21.528903 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56dacd23-6234-4d06-968b-ed6a51d03f70" path="/var/lib/kubelet/pods/56dacd23-6234-4d06-968b-ed6a51d03f70/volumes" Jan 22 15:48:22 crc kubenswrapper[4825]: I0122 15:48:22.301158 4825 generic.go:334] "Generic (PLEG): container finished" podID="020cd9b5-8960-4a30-8322-c1de670f2f10" containerID="4f70e69e07fe0c12bcb0cbc744d21b186f94643f57ca9ca3a58f70735bcdc862" exitCode=0 Jan 22 15:48:22 crc kubenswrapper[4825]: I0122 15:48:22.301241 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pt4fj" event={"ID":"020cd9b5-8960-4a30-8322-c1de670f2f10","Type":"ContainerDied","Data":"4f70e69e07fe0c12bcb0cbc744d21b186f94643f57ca9ca3a58f70735bcdc862"} Jan 22 15:48:23 crc kubenswrapper[4825]: I0122 15:48:23.795329 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pt4fj" Jan 22 15:48:23 crc kubenswrapper[4825]: I0122 15:48:23.821541 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhj76\" (UniqueName: \"kubernetes.io/projected/020cd9b5-8960-4a30-8322-c1de670f2f10-kube-api-access-lhj76\") pod \"020cd9b5-8960-4a30-8322-c1de670f2f10\" (UID: \"020cd9b5-8960-4a30-8322-c1de670f2f10\") " Jan 22 15:48:23 crc kubenswrapper[4825]: I0122 15:48:23.821677 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/020cd9b5-8960-4a30-8322-c1de670f2f10-combined-ca-bundle\") pod \"020cd9b5-8960-4a30-8322-c1de670f2f10\" (UID: \"020cd9b5-8960-4a30-8322-c1de670f2f10\") " Jan 22 15:48:23 crc kubenswrapper[4825]: I0122 15:48:23.821731 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/020cd9b5-8960-4a30-8322-c1de670f2f10-scripts\") pod \"020cd9b5-8960-4a30-8322-c1de670f2f10\" (UID: \"020cd9b5-8960-4a30-8322-c1de670f2f10\") " Jan 22 15:48:23 crc kubenswrapper[4825]: I0122 15:48:23.821745 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/020cd9b5-8960-4a30-8322-c1de670f2f10-config-data\") pod \"020cd9b5-8960-4a30-8322-c1de670f2f10\" (UID: \"020cd9b5-8960-4a30-8322-c1de670f2f10\") " Jan 22 15:48:23 crc kubenswrapper[4825]: I0122 15:48:23.870553 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/020cd9b5-8960-4a30-8322-c1de670f2f10-kube-api-access-lhj76" (OuterVolumeSpecName: "kube-api-access-lhj76") pod "020cd9b5-8960-4a30-8322-c1de670f2f10" (UID: "020cd9b5-8960-4a30-8322-c1de670f2f10"). InnerVolumeSpecName "kube-api-access-lhj76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:48:23 crc kubenswrapper[4825]: I0122 15:48:23.873222 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/020cd9b5-8960-4a30-8322-c1de670f2f10-scripts" (OuterVolumeSpecName: "scripts") pod "020cd9b5-8960-4a30-8322-c1de670f2f10" (UID: "020cd9b5-8960-4a30-8322-c1de670f2f10"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:48:23 crc kubenswrapper[4825]: I0122 15:48:23.876531 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/020cd9b5-8960-4a30-8322-c1de670f2f10-config-data" (OuterVolumeSpecName: "config-data") pod "020cd9b5-8960-4a30-8322-c1de670f2f10" (UID: "020cd9b5-8960-4a30-8322-c1de670f2f10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:48:23 crc kubenswrapper[4825]: I0122 15:48:23.883527 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/020cd9b5-8960-4a30-8322-c1de670f2f10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "020cd9b5-8960-4a30-8322-c1de670f2f10" (UID: "020cd9b5-8960-4a30-8322-c1de670f2f10"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:48:23 crc kubenswrapper[4825]: I0122 15:48:23.924068 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/020cd9b5-8960-4a30-8322-c1de670f2f10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:23 crc kubenswrapper[4825]: I0122 15:48:23.924097 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/020cd9b5-8960-4a30-8322-c1de670f2f10-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:23 crc kubenswrapper[4825]: I0122 15:48:23.924106 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/020cd9b5-8960-4a30-8322-c1de670f2f10-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:23 crc kubenswrapper[4825]: I0122 15:48:23.924117 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhj76\" (UniqueName: \"kubernetes.io/projected/020cd9b5-8960-4a30-8322-c1de670f2f10-kube-api-access-lhj76\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:24 crc kubenswrapper[4825]: I0122 15:48:24.327245 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pt4fj" event={"ID":"020cd9b5-8960-4a30-8322-c1de670f2f10","Type":"ContainerDied","Data":"96422bed906fe1eb77f7bd7652fed2e623385a730daa54e76b446fb2d760c0af"} Jan 22 15:48:24 crc kubenswrapper[4825]: I0122 15:48:24.327294 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96422bed906fe1eb77f7bd7652fed2e623385a730daa54e76b446fb2d760c0af" Jan 22 15:48:24 crc kubenswrapper[4825]: I0122 15:48:24.327358 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pt4fj" Jan 22 15:48:24 crc kubenswrapper[4825]: I0122 15:48:24.334706 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79d10c93-0240-4a92-9205-7ecc258e0c49","Type":"ContainerStarted","Data":"22df6c5d82bf9e733c15073dc1898f199d64b36b6b0431d696ba5456229aabdc"} Jan 22 15:48:24 crc kubenswrapper[4825]: I0122 15:48:24.334899 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="79d10c93-0240-4a92-9205-7ecc258e0c49" containerName="ceilometer-central-agent" containerID="cri-o://d6ea2d3450ac0ca2917c21fd9b22937d7cd4ec843a9f4fbaea6d835004c358a3" gracePeriod=30 Jan 22 15:48:24 crc kubenswrapper[4825]: I0122 15:48:24.335368 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 15:48:24 crc kubenswrapper[4825]: I0122 15:48:24.335664 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="79d10c93-0240-4a92-9205-7ecc258e0c49" containerName="proxy-httpd" containerID="cri-o://22df6c5d82bf9e733c15073dc1898f199d64b36b6b0431d696ba5456229aabdc" gracePeriod=30 Jan 22 15:48:24 crc kubenswrapper[4825]: I0122 15:48:24.335713 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="79d10c93-0240-4a92-9205-7ecc258e0c49" containerName="sg-core" containerID="cri-o://978334a91a15bd1fc890686aa3afb0eac368a82b60877a78a17a90dd4287dc46" gracePeriod=30 Jan 22 15:48:24 crc kubenswrapper[4825]: I0122 15:48:24.335746 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="79d10c93-0240-4a92-9205-7ecc258e0c49" containerName="ceilometer-notification-agent" containerID="cri-o://32014d6ef2306c1dbc5b119d3d4a34b690ccb37c24c9d27cee69d8f1f3be2ff5" gracePeriod=30 Jan 22 15:48:24 crc kubenswrapper[4825]: I0122 15:48:24.375556 4825 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7623711699999998 podStartE2EDuration="7.375534572s" podCreationTimestamp="2026-01-22 15:48:17 +0000 UTC" firstStartedPulling="2026-01-22 15:48:18.480665289 +0000 UTC m=+1445.242192199" lastFinishedPulling="2026-01-22 15:48:23.093828691 +0000 UTC m=+1449.855355601" observedRunningTime="2026-01-22 15:48:24.359365517 +0000 UTC m=+1451.120892427" watchObservedRunningTime="2026-01-22 15:48:24.375534572 +0000 UTC m=+1451.137061502" Jan 22 15:48:24 crc kubenswrapper[4825]: I0122 15:48:24.591681 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 15:48:24 crc kubenswrapper[4825]: I0122 15:48:24.591953 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="08b56aa0-d47b-4742-92ea-ecc106776bd4" containerName="nova-api-log" containerID="cri-o://1b01bcb1986270592b766c84c6e3eabc89d23db2ef19539734c15a0ca0f3eecb" gracePeriod=30 Jan 22 15:48:24 crc kubenswrapper[4825]: I0122 15:48:24.592151 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="08b56aa0-d47b-4742-92ea-ecc106776bd4" containerName="nova-api-api" containerID="cri-o://cb3d640515e4d069a3aee78de8fedc7cc0fece72c5a788bdabf59dfd2a32b789" gracePeriod=30 Jan 22 15:48:24 crc kubenswrapper[4825]: I0122 15:48:24.629095 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 15:48:24 crc kubenswrapper[4825]: I0122 15:48:24.629362 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2a3aa787-741e-4ea0-968b-bd87cf38efc5" containerName="nova-scheduler-scheduler" containerID="cri-o://02b12f2c3c0fa48194e78c653fdb99a573a1dcf9c273d4f7746d8c6fcdba8908" gracePeriod=30 Jan 22 15:48:24 crc kubenswrapper[4825]: I0122 15:48:24.644668 4825 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-metadata-0"] Jan 22 15:48:24 crc kubenswrapper[4825]: I0122 15:48:24.644940 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="952055c6-1b43-4621-9fd9-4078d8539301" containerName="nova-metadata-log" containerID="cri-o://db580a11954774ed92bac9da9778a2ed53d03c5579f431a24bbcae0b6e3a033f" gracePeriod=30 Jan 22 15:48:24 crc kubenswrapper[4825]: I0122 15:48:24.645441 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="952055c6-1b43-4621-9fd9-4078d8539301" containerName="nova-metadata-metadata" containerID="cri-o://dc6dccd58d8e3770162d09072d8fe87c08bc3db2385c359ca2f2efa2b955cc92" gracePeriod=30 Jan 22 15:48:24 crc kubenswrapper[4825]: E0122 15:48:24.787289 4825 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79d10c93_0240_4a92_9205_7ecc258e0c49.slice/crio-conmon-22df6c5d82bf9e733c15073dc1898f199d64b36b6b0431d696ba5456229aabdc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79d10c93_0240_4a92_9205_7ecc258e0c49.slice/crio-22df6c5d82bf9e733c15073dc1898f199d64b36b6b0431d696ba5456229aabdc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod952055c6_1b43_4621_9fd9_4078d8539301.slice/crio-conmon-db580a11954774ed92bac9da9778a2ed53d03c5579f431a24bbcae0b6e3a033f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08b56aa0_d47b_4742_92ea_ecc106776bd4.slice/crio-conmon-1b01bcb1986270592b766c84c6e3eabc89d23db2ef19539734c15a0ca0f3eecb.scope\": RecentStats: unable to find data in memory cache]" Jan 22 15:48:25 crc kubenswrapper[4825]: E0122 15:48:25.184184 4825 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="02b12f2c3c0fa48194e78c653fdb99a573a1dcf9c273d4f7746d8c6fcdba8908" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 15:48:25 crc kubenswrapper[4825]: E0122 15:48:25.188095 4825 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="02b12f2c3c0fa48194e78c653fdb99a573a1dcf9c273d4f7746d8c6fcdba8908" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 15:48:25 crc kubenswrapper[4825]: E0122 15:48:25.190336 4825 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="02b12f2c3c0fa48194e78c653fdb99a573a1dcf9c273d4f7746d8c6fcdba8908" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 15:48:25 crc kubenswrapper[4825]: E0122 15:48:25.190418 4825 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="2a3aa787-741e-4ea0-968b-bd87cf38efc5" containerName="nova-scheduler-scheduler" Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.349528 4825 generic.go:334] "Generic (PLEG): container finished" podID="08b56aa0-d47b-4742-92ea-ecc106776bd4" containerID="cb3d640515e4d069a3aee78de8fedc7cc0fece72c5a788bdabf59dfd2a32b789" exitCode=0 Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.349570 4825 generic.go:334] "Generic (PLEG): container finished" podID="08b56aa0-d47b-4742-92ea-ecc106776bd4" containerID="1b01bcb1986270592b766c84c6e3eabc89d23db2ef19539734c15a0ca0f3eecb" exitCode=143 Jan 22 
15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.349635 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"08b56aa0-d47b-4742-92ea-ecc106776bd4","Type":"ContainerDied","Data":"cb3d640515e4d069a3aee78de8fedc7cc0fece72c5a788bdabf59dfd2a32b789"} Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.349791 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"08b56aa0-d47b-4742-92ea-ecc106776bd4","Type":"ContainerDied","Data":"1b01bcb1986270592b766c84c6e3eabc89d23db2ef19539734c15a0ca0f3eecb"} Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.349842 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"08b56aa0-d47b-4742-92ea-ecc106776bd4","Type":"ContainerDied","Data":"5142685a5616ccf4283e5380b7f590e24e6c29656a17d8e11634ffbd593ac038"} Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.349860 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5142685a5616ccf4283e5380b7f590e24e6c29656a17d8e11634ffbd593ac038" Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.352561 4825 generic.go:334] "Generic (PLEG): container finished" podID="952055c6-1b43-4621-9fd9-4078d8539301" containerID="db580a11954774ed92bac9da9778a2ed53d03c5579f431a24bbcae0b6e3a033f" exitCode=143 Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.352637 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"952055c6-1b43-4621-9fd9-4078d8539301","Type":"ContainerDied","Data":"db580a11954774ed92bac9da9778a2ed53d03c5579f431a24bbcae0b6e3a033f"} Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.355557 4825 generic.go:334] "Generic (PLEG): container finished" podID="79d10c93-0240-4a92-9205-7ecc258e0c49" containerID="22df6c5d82bf9e733c15073dc1898f199d64b36b6b0431d696ba5456229aabdc" exitCode=0 Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.355595 4825 generic.go:334] "Generic 
(PLEG): container finished" podID="79d10c93-0240-4a92-9205-7ecc258e0c49" containerID="978334a91a15bd1fc890686aa3afb0eac368a82b60877a78a17a90dd4287dc46" exitCode=2 Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.355605 4825 generic.go:334] "Generic (PLEG): container finished" podID="79d10c93-0240-4a92-9205-7ecc258e0c49" containerID="32014d6ef2306c1dbc5b119d3d4a34b690ccb37c24c9d27cee69d8f1f3be2ff5" exitCode=0 Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.355626 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79d10c93-0240-4a92-9205-7ecc258e0c49","Type":"ContainerDied","Data":"22df6c5d82bf9e733c15073dc1898f199d64b36b6b0431d696ba5456229aabdc"} Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.355673 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79d10c93-0240-4a92-9205-7ecc258e0c49","Type":"ContainerDied","Data":"978334a91a15bd1fc890686aa3afb0eac368a82b60877a78a17a90dd4287dc46"} Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.355686 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79d10c93-0240-4a92-9205-7ecc258e0c49","Type":"ContainerDied","Data":"32014d6ef2306c1dbc5b119d3d4a34b690ccb37c24c9d27cee69d8f1f3be2ff5"} Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.405532 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.568173 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-combined-ca-bundle\") pod \"08b56aa0-d47b-4742-92ea-ecc106776bd4\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.568255 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-public-tls-certs\") pod \"08b56aa0-d47b-4742-92ea-ecc106776bd4\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.568310 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-internal-tls-certs\") pod \"08b56aa0-d47b-4742-92ea-ecc106776bd4\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.568372 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqtsk\" (UniqueName: \"kubernetes.io/projected/08b56aa0-d47b-4742-92ea-ecc106776bd4-kube-api-access-nqtsk\") pod \"08b56aa0-d47b-4742-92ea-ecc106776bd4\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.568500 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08b56aa0-d47b-4742-92ea-ecc106776bd4-logs\") pod \"08b56aa0-d47b-4742-92ea-ecc106776bd4\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.568562 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-config-data\") pod \"08b56aa0-d47b-4742-92ea-ecc106776bd4\" (UID: \"08b56aa0-d47b-4742-92ea-ecc106776bd4\") " Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.584788 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08b56aa0-d47b-4742-92ea-ecc106776bd4-logs" (OuterVolumeSpecName: "logs") pod "08b56aa0-d47b-4742-92ea-ecc106776bd4" (UID: "08b56aa0-d47b-4742-92ea-ecc106776bd4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.597346 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08b56aa0-d47b-4742-92ea-ecc106776bd4-kube-api-access-nqtsk" (OuterVolumeSpecName: "kube-api-access-nqtsk") pod "08b56aa0-d47b-4742-92ea-ecc106776bd4" (UID: "08b56aa0-d47b-4742-92ea-ecc106776bd4"). InnerVolumeSpecName "kube-api-access-nqtsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.623225 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-config-data" (OuterVolumeSpecName: "config-data") pod "08b56aa0-d47b-4742-92ea-ecc106776bd4" (UID: "08b56aa0-d47b-4742-92ea-ecc106776bd4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.660828 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08b56aa0-d47b-4742-92ea-ecc106776bd4" (UID: "08b56aa0-d47b-4742-92ea-ecc106776bd4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.667472 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "08b56aa0-d47b-4742-92ea-ecc106776bd4" (UID: "08b56aa0-d47b-4742-92ea-ecc106776bd4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.673243 4825 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.673279 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqtsk\" (UniqueName: \"kubernetes.io/projected/08b56aa0-d47b-4742-92ea-ecc106776bd4-kube-api-access-nqtsk\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.673295 4825 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08b56aa0-d47b-4742-92ea-ecc106776bd4-logs\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.673306 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.673320 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.707484 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "08b56aa0-d47b-4742-92ea-ecc106776bd4" (UID: "08b56aa0-d47b-4742-92ea-ecc106776bd4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:48:25 crc kubenswrapper[4825]: I0122 15:48:25.777313 4825 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b56aa0-d47b-4742-92ea-ecc106776bd4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.320380 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.389073 4825 generic.go:334] "Generic (PLEG): container finished" podID="2a3aa787-741e-4ea0-968b-bd87cf38efc5" containerID="02b12f2c3c0fa48194e78c653fdb99a573a1dcf9c273d4f7746d8c6fcdba8908" exitCode=0 Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.389174 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.391271 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.391404 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2a3aa787-741e-4ea0-968b-bd87cf38efc5","Type":"ContainerDied","Data":"02b12f2c3c0fa48194e78c653fdb99a573a1dcf9c273d4f7746d8c6fcdba8908"} Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.391451 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2a3aa787-741e-4ea0-968b-bd87cf38efc5","Type":"ContainerDied","Data":"44afc6c65e0598ed466e634b5774e26c6be81e8632e8974aa8dd9b85ab6d7ae7"} Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.391472 4825 scope.go:117] "RemoveContainer" containerID="02b12f2c3c0fa48194e78c653fdb99a573a1dcf9c273d4f7746d8c6fcdba8908" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.437572 4825 scope.go:117] "RemoveContainer" containerID="02b12f2c3c0fa48194e78c653fdb99a573a1dcf9c273d4f7746d8c6fcdba8908" Jan 22 15:48:26 crc kubenswrapper[4825]: E0122 15:48:26.438490 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02b12f2c3c0fa48194e78c653fdb99a573a1dcf9c273d4f7746d8c6fcdba8908\": container with ID starting with 02b12f2c3c0fa48194e78c653fdb99a573a1dcf9c273d4f7746d8c6fcdba8908 not found: ID does not exist" containerID="02b12f2c3c0fa48194e78c653fdb99a573a1dcf9c273d4f7746d8c6fcdba8908" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.438529 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b12f2c3c0fa48194e78c653fdb99a573a1dcf9c273d4f7746d8c6fcdba8908"} err="failed to get container status \"02b12f2c3c0fa48194e78c653fdb99a573a1dcf9c273d4f7746d8c6fcdba8908\": rpc error: code = NotFound desc = could not find container \"02b12f2c3c0fa48194e78c653fdb99a573a1dcf9c273d4f7746d8c6fcdba8908\": container with ID starting with 
02b12f2c3c0fa48194e78c653fdb99a573a1dcf9c273d4f7746d8c6fcdba8908 not found: ID does not exist" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.446015 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.460519 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.473839 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 15:48:26 crc kubenswrapper[4825]: E0122 15:48:26.474446 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56dacd23-6234-4d06-968b-ed6a51d03f70" containerName="dnsmasq-dns" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.474473 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="56dacd23-6234-4d06-968b-ed6a51d03f70" containerName="dnsmasq-dns" Jan 22 15:48:26 crc kubenswrapper[4825]: E0122 15:48:26.474489 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3aa787-741e-4ea0-968b-bd87cf38efc5" containerName="nova-scheduler-scheduler" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.474498 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3aa787-741e-4ea0-968b-bd87cf38efc5" containerName="nova-scheduler-scheduler" Jan 22 15:48:26 crc kubenswrapper[4825]: E0122 15:48:26.474512 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b56aa0-d47b-4742-92ea-ecc106776bd4" containerName="nova-api-api" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.474520 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b56aa0-d47b-4742-92ea-ecc106776bd4" containerName="nova-api-api" Jan 22 15:48:26 crc kubenswrapper[4825]: E0122 15:48:26.474552 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56dacd23-6234-4d06-968b-ed6a51d03f70" containerName="init" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.474565 4825 
state_mem.go:107] "Deleted CPUSet assignment" podUID="56dacd23-6234-4d06-968b-ed6a51d03f70" containerName="init" Jan 22 15:48:26 crc kubenswrapper[4825]: E0122 15:48:26.474579 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b56aa0-d47b-4742-92ea-ecc106776bd4" containerName="nova-api-log" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.474586 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b56aa0-d47b-4742-92ea-ecc106776bd4" containerName="nova-api-log" Jan 22 15:48:26 crc kubenswrapper[4825]: E0122 15:48:26.474617 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="020cd9b5-8960-4a30-8322-c1de670f2f10" containerName="nova-manage" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.474627 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="020cd9b5-8960-4a30-8322-c1de670f2f10" containerName="nova-manage" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.474894 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="020cd9b5-8960-4a30-8322-c1de670f2f10" containerName="nova-manage" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.474920 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="56dacd23-6234-4d06-968b-ed6a51d03f70" containerName="dnsmasq-dns" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.474935 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a3aa787-741e-4ea0-968b-bd87cf38efc5" containerName="nova-scheduler-scheduler" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.474959 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="08b56aa0-d47b-4742-92ea-ecc106776bd4" containerName="nova-api-log" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.474971 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="08b56aa0-d47b-4742-92ea-ecc106776bd4" containerName="nova-api-api" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.476494 4825 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.480808 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.481029 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.481197 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.486941 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.495637 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrpdg\" (UniqueName: \"kubernetes.io/projected/2a3aa787-741e-4ea0-968b-bd87cf38efc5-kube-api-access-rrpdg\") pod \"2a3aa787-741e-4ea0-968b-bd87cf38efc5\" (UID: \"2a3aa787-741e-4ea0-968b-bd87cf38efc5\") " Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.495735 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3aa787-741e-4ea0-968b-bd87cf38efc5-config-data\") pod \"2a3aa787-741e-4ea0-968b-bd87cf38efc5\" (UID: \"2a3aa787-741e-4ea0-968b-bd87cf38efc5\") " Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.495876 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3aa787-741e-4ea0-968b-bd87cf38efc5-combined-ca-bundle\") pod \"2a3aa787-741e-4ea0-968b-bd87cf38efc5\" (UID: \"2a3aa787-741e-4ea0-968b-bd87cf38efc5\") " Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.505110 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2a3aa787-741e-4ea0-968b-bd87cf38efc5-kube-api-access-rrpdg" (OuterVolumeSpecName: "kube-api-access-rrpdg") pod "2a3aa787-741e-4ea0-968b-bd87cf38efc5" (UID: "2a3aa787-741e-4ea0-968b-bd87cf38efc5"). InnerVolumeSpecName "kube-api-access-rrpdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.534054 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3aa787-741e-4ea0-968b-bd87cf38efc5-config-data" (OuterVolumeSpecName: "config-data") pod "2a3aa787-741e-4ea0-968b-bd87cf38efc5" (UID: "2a3aa787-741e-4ea0-968b-bd87cf38efc5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.534328 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3aa787-741e-4ea0-968b-bd87cf38efc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a3aa787-741e-4ea0-968b-bd87cf38efc5" (UID: "2a3aa787-741e-4ea0-968b-bd87cf38efc5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.598912 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/151dca33-da19-4a32-948e-ec8bc6d14829-internal-tls-certs\") pod \"nova-api-0\" (UID: \"151dca33-da19-4a32-948e-ec8bc6d14829\") " pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.599233 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z77j5\" (UniqueName: \"kubernetes.io/projected/151dca33-da19-4a32-948e-ec8bc6d14829-kube-api-access-z77j5\") pod \"nova-api-0\" (UID: \"151dca33-da19-4a32-948e-ec8bc6d14829\") " pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.599274 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/151dca33-da19-4a32-948e-ec8bc6d14829-config-data\") pod \"nova-api-0\" (UID: \"151dca33-da19-4a32-948e-ec8bc6d14829\") " pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.599333 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/151dca33-da19-4a32-948e-ec8bc6d14829-logs\") pod \"nova-api-0\" (UID: \"151dca33-da19-4a32-948e-ec8bc6d14829\") " pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.599410 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/151dca33-da19-4a32-948e-ec8bc6d14829-public-tls-certs\") pod \"nova-api-0\" (UID: \"151dca33-da19-4a32-948e-ec8bc6d14829\") " pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.599521 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/151dca33-da19-4a32-948e-ec8bc6d14829-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"151dca33-da19-4a32-948e-ec8bc6d14829\") " pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.599673 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3aa787-741e-4ea0-968b-bd87cf38efc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.599696 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrpdg\" (UniqueName: \"kubernetes.io/projected/2a3aa787-741e-4ea0-968b-bd87cf38efc5-kube-api-access-rrpdg\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.599709 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3aa787-741e-4ea0-968b-bd87cf38efc5-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.701517 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/151dca33-da19-4a32-948e-ec8bc6d14829-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"151dca33-da19-4a32-948e-ec8bc6d14829\") " pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.701825 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/151dca33-da19-4a32-948e-ec8bc6d14829-internal-tls-certs\") pod \"nova-api-0\" (UID: \"151dca33-da19-4a32-948e-ec8bc6d14829\") " pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.701872 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z77j5\" (UniqueName: 
\"kubernetes.io/projected/151dca33-da19-4a32-948e-ec8bc6d14829-kube-api-access-z77j5\") pod \"nova-api-0\" (UID: \"151dca33-da19-4a32-948e-ec8bc6d14829\") " pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.701911 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/151dca33-da19-4a32-948e-ec8bc6d14829-config-data\") pod \"nova-api-0\" (UID: \"151dca33-da19-4a32-948e-ec8bc6d14829\") " pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.702031 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/151dca33-da19-4a32-948e-ec8bc6d14829-logs\") pod \"nova-api-0\" (UID: \"151dca33-da19-4a32-948e-ec8bc6d14829\") " pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.702187 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/151dca33-da19-4a32-948e-ec8bc6d14829-public-tls-certs\") pod \"nova-api-0\" (UID: \"151dca33-da19-4a32-948e-ec8bc6d14829\") " pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.704937 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/151dca33-da19-4a32-948e-ec8bc6d14829-logs\") pod \"nova-api-0\" (UID: \"151dca33-da19-4a32-948e-ec8bc6d14829\") " pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.707428 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/151dca33-da19-4a32-948e-ec8bc6d14829-public-tls-certs\") pod \"nova-api-0\" (UID: \"151dca33-da19-4a32-948e-ec8bc6d14829\") " pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.710818 4825 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/151dca33-da19-4a32-948e-ec8bc6d14829-config-data\") pod \"nova-api-0\" (UID: \"151dca33-da19-4a32-948e-ec8bc6d14829\") " pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.712797 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/151dca33-da19-4a32-948e-ec8bc6d14829-internal-tls-certs\") pod \"nova-api-0\" (UID: \"151dca33-da19-4a32-948e-ec8bc6d14829\") " pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.717667 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/151dca33-da19-4a32-948e-ec8bc6d14829-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"151dca33-da19-4a32-948e-ec8bc6d14829\") " pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.735709 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z77j5\" (UniqueName: \"kubernetes.io/projected/151dca33-da19-4a32-948e-ec8bc6d14829-kube-api-access-z77j5\") pod \"nova-api-0\" (UID: \"151dca33-da19-4a32-948e-ec8bc6d14829\") " pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.768941 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.806635 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.807576 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.824543 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.826678 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.833559 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 15:48:26 crc kubenswrapper[4825]: I0122 15:48:26.866845 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.010644 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7zqm\" (UniqueName: \"kubernetes.io/projected/ac2aec93-3b95-48dd-8799-49e89387ab25-kube-api-access-z7zqm\") pod \"nova-scheduler-0\" (UID: \"ac2aec93-3b95-48dd-8799-49e89387ab25\") " pod="openstack/nova-scheduler-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.011348 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac2aec93-3b95-48dd-8799-49e89387ab25-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac2aec93-3b95-48dd-8799-49e89387ab25\") " pod="openstack/nova-scheduler-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.011539 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac2aec93-3b95-48dd-8799-49e89387ab25-config-data\") pod \"nova-scheduler-0\" (UID: \"ac2aec93-3b95-48dd-8799-49e89387ab25\") " 
pod="openstack/nova-scheduler-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.030850 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.114207 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac2aec93-3b95-48dd-8799-49e89387ab25-config-data\") pod \"nova-scheduler-0\" (UID: \"ac2aec93-3b95-48dd-8799-49e89387ab25\") " pod="openstack/nova-scheduler-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.114379 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7zqm\" (UniqueName: \"kubernetes.io/projected/ac2aec93-3b95-48dd-8799-49e89387ab25-kube-api-access-z7zqm\") pod \"nova-scheduler-0\" (UID: \"ac2aec93-3b95-48dd-8799-49e89387ab25\") " pod="openstack/nova-scheduler-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.114430 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac2aec93-3b95-48dd-8799-49e89387ab25-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac2aec93-3b95-48dd-8799-49e89387ab25\") " pod="openstack/nova-scheduler-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.120968 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac2aec93-3b95-48dd-8799-49e89387ab25-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac2aec93-3b95-48dd-8799-49e89387ab25\") " pod="openstack/nova-scheduler-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.121011 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac2aec93-3b95-48dd-8799-49e89387ab25-config-data\") pod \"nova-scheduler-0\" (UID: \"ac2aec93-3b95-48dd-8799-49e89387ab25\") " 
pod="openstack/nova-scheduler-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.136275 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7zqm\" (UniqueName: \"kubernetes.io/projected/ac2aec93-3b95-48dd-8799-49e89387ab25-kube-api-access-z7zqm\") pod \"nova-scheduler-0\" (UID: \"ac2aec93-3b95-48dd-8799-49e89387ab25\") " pod="openstack/nova-scheduler-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.166654 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.215544 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-sg-core-conf-yaml\") pod \"79d10c93-0240-4a92-9205-7ecc258e0c49\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.215708 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79d10c93-0240-4a92-9205-7ecc258e0c49-log-httpd\") pod \"79d10c93-0240-4a92-9205-7ecc258e0c49\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.216345 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79d10c93-0240-4a92-9205-7ecc258e0c49-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "79d10c93-0240-4a92-9205-7ecc258e0c49" (UID: "79d10c93-0240-4a92-9205-7ecc258e0c49"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.216456 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-combined-ca-bundle\") pod \"79d10c93-0240-4a92-9205-7ecc258e0c49\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.216491 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-config-data\") pod \"79d10c93-0240-4a92-9205-7ecc258e0c49\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.216536 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9bcs\" (UniqueName: \"kubernetes.io/projected/79d10c93-0240-4a92-9205-7ecc258e0c49-kube-api-access-j9bcs\") pod \"79d10c93-0240-4a92-9205-7ecc258e0c49\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.216672 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-scripts\") pod \"79d10c93-0240-4a92-9205-7ecc258e0c49\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.216711 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-ceilometer-tls-certs\") pod \"79d10c93-0240-4a92-9205-7ecc258e0c49\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.216759 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/79d10c93-0240-4a92-9205-7ecc258e0c49-run-httpd\") pod \"79d10c93-0240-4a92-9205-7ecc258e0c49\" (UID: \"79d10c93-0240-4a92-9205-7ecc258e0c49\") " Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.217599 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79d10c93-0240-4a92-9205-7ecc258e0c49-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "79d10c93-0240-4a92-9205-7ecc258e0c49" (UID: "79d10c93-0240-4a92-9205-7ecc258e0c49"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.218500 4825 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79d10c93-0240-4a92-9205-7ecc258e0c49-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.218524 4825 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79d10c93-0240-4a92-9205-7ecc258e0c49-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.221333 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-scripts" (OuterVolumeSpecName: "scripts") pod "79d10c93-0240-4a92-9205-7ecc258e0c49" (UID: "79d10c93-0240-4a92-9205-7ecc258e0c49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.221419 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d10c93-0240-4a92-9205-7ecc258e0c49-kube-api-access-j9bcs" (OuterVolumeSpecName: "kube-api-access-j9bcs") pod "79d10c93-0240-4a92-9205-7ecc258e0c49" (UID: "79d10c93-0240-4a92-9205-7ecc258e0c49"). InnerVolumeSpecName "kube-api-access-j9bcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.259411 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "79d10c93-0240-4a92-9205-7ecc258e0c49" (UID: "79d10c93-0240-4a92-9205-7ecc258e0c49"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.288520 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "79d10c93-0240-4a92-9205-7ecc258e0c49" (UID: "79d10c93-0240-4a92-9205-7ecc258e0c49"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.303120 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79d10c93-0240-4a92-9205-7ecc258e0c49" (UID: "79d10c93-0240-4a92-9205-7ecc258e0c49"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.321042 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.321073 4825 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.321084 4825 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.321094 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.321105 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9bcs\" (UniqueName: \"kubernetes.io/projected/79d10c93-0240-4a92-9205-7ecc258e0c49-kube-api-access-j9bcs\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.348932 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.369408 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-config-data" (OuterVolumeSpecName: "config-data") pod "79d10c93-0240-4a92-9205-7ecc258e0c49" (UID: "79d10c93-0240-4a92-9205-7ecc258e0c49"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.403783 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"151dca33-da19-4a32-948e-ec8bc6d14829","Type":"ContainerStarted","Data":"03368efdd1b8fb2ad352d3d6715cd60a0843cb248a67c8985f685686fe2689a4"} Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.406452 4825 generic.go:334] "Generic (PLEG): container finished" podID="79d10c93-0240-4a92-9205-7ecc258e0c49" containerID="d6ea2d3450ac0ca2917c21fd9b22937d7cd4ec843a9f4fbaea6d835004c358a3" exitCode=0 Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.406486 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79d10c93-0240-4a92-9205-7ecc258e0c49","Type":"ContainerDied","Data":"d6ea2d3450ac0ca2917c21fd9b22937d7cd4ec843a9f4fbaea6d835004c358a3"} Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.406508 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.406522 4825 scope.go:117] "RemoveContainer" containerID="22df6c5d82bf9e733c15073dc1898f199d64b36b6b0431d696ba5456229aabdc" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.406508 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79d10c93-0240-4a92-9205-7ecc258e0c49","Type":"ContainerDied","Data":"b29c11cd7ee8be63aad514b2543f3d9b72334b55a8320f224d95951c28a60101"} Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.430973 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d10c93-0240-4a92-9205-7ecc258e0c49-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.451263 4825 scope.go:117] "RemoveContainer" containerID="978334a91a15bd1fc890686aa3afb0eac368a82b60877a78a17a90dd4287dc46" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.476880 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.481525 4825 scope.go:117] "RemoveContainer" containerID="32014d6ef2306c1dbc5b119d3d4a34b690ccb37c24c9d27cee69d8f1f3be2ff5" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.485852 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.516286 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:48:27 crc kubenswrapper[4825]: E0122 15:48:27.516832 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d10c93-0240-4a92-9205-7ecc258e0c49" containerName="sg-core" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.516853 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d10c93-0240-4a92-9205-7ecc258e0c49" containerName="sg-core" Jan 22 15:48:27 crc 
kubenswrapper[4825]: E0122 15:48:27.516866 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d10c93-0240-4a92-9205-7ecc258e0c49" containerName="ceilometer-notification-agent" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.516875 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d10c93-0240-4a92-9205-7ecc258e0c49" containerName="ceilometer-notification-agent" Jan 22 15:48:27 crc kubenswrapper[4825]: E0122 15:48:27.516892 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d10c93-0240-4a92-9205-7ecc258e0c49" containerName="proxy-httpd" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.516900 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d10c93-0240-4a92-9205-7ecc258e0c49" containerName="proxy-httpd" Jan 22 15:48:27 crc kubenswrapper[4825]: E0122 15:48:27.516927 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d10c93-0240-4a92-9205-7ecc258e0c49" containerName="ceilometer-central-agent" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.516936 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d10c93-0240-4a92-9205-7ecc258e0c49" containerName="ceilometer-central-agent" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.517203 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d10c93-0240-4a92-9205-7ecc258e0c49" containerName="sg-core" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.517223 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d10c93-0240-4a92-9205-7ecc258e0c49" containerName="ceilometer-central-agent" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.517233 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d10c93-0240-4a92-9205-7ecc258e0c49" containerName="proxy-httpd" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.517263 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d10c93-0240-4a92-9205-7ecc258e0c49" 
containerName="ceilometer-notification-agent" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.521678 4825 scope.go:117] "RemoveContainer" containerID="d6ea2d3450ac0ca2917c21fd9b22937d7cd4ec843a9f4fbaea6d835004c358a3" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.523892 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.529016 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.529108 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.529184 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.539298 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08b56aa0-d47b-4742-92ea-ecc106776bd4" path="/var/lib/kubelet/pods/08b56aa0-d47b-4742-92ea-ecc106776bd4/volumes" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.540032 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a3aa787-741e-4ea0-968b-bd87cf38efc5" path="/var/lib/kubelet/pods/2a3aa787-741e-4ea0-968b-bd87cf38efc5/volumes" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.540691 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d10c93-0240-4a92-9205-7ecc258e0c49" path="/var/lib/kubelet/pods/79d10c93-0240-4a92-9205-7ecc258e0c49/volumes" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.542864 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.559044 4825 scope.go:117] "RemoveContainer" containerID="22df6c5d82bf9e733c15073dc1898f199d64b36b6b0431d696ba5456229aabdc" Jan 22 15:48:27 crc 
kubenswrapper[4825]: E0122 15:48:27.559419 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22df6c5d82bf9e733c15073dc1898f199d64b36b6b0431d696ba5456229aabdc\": container with ID starting with 22df6c5d82bf9e733c15073dc1898f199d64b36b6b0431d696ba5456229aabdc not found: ID does not exist" containerID="22df6c5d82bf9e733c15073dc1898f199d64b36b6b0431d696ba5456229aabdc" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.559460 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22df6c5d82bf9e733c15073dc1898f199d64b36b6b0431d696ba5456229aabdc"} err="failed to get container status \"22df6c5d82bf9e733c15073dc1898f199d64b36b6b0431d696ba5456229aabdc\": rpc error: code = NotFound desc = could not find container \"22df6c5d82bf9e733c15073dc1898f199d64b36b6b0431d696ba5456229aabdc\": container with ID starting with 22df6c5d82bf9e733c15073dc1898f199d64b36b6b0431d696ba5456229aabdc not found: ID does not exist" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.559482 4825 scope.go:117] "RemoveContainer" containerID="978334a91a15bd1fc890686aa3afb0eac368a82b60877a78a17a90dd4287dc46" Jan 22 15:48:27 crc kubenswrapper[4825]: E0122 15:48:27.559700 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"978334a91a15bd1fc890686aa3afb0eac368a82b60877a78a17a90dd4287dc46\": container with ID starting with 978334a91a15bd1fc890686aa3afb0eac368a82b60877a78a17a90dd4287dc46 not found: ID does not exist" containerID="978334a91a15bd1fc890686aa3afb0eac368a82b60877a78a17a90dd4287dc46" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.559729 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"978334a91a15bd1fc890686aa3afb0eac368a82b60877a78a17a90dd4287dc46"} err="failed to get container status 
\"978334a91a15bd1fc890686aa3afb0eac368a82b60877a78a17a90dd4287dc46\": rpc error: code = NotFound desc = could not find container \"978334a91a15bd1fc890686aa3afb0eac368a82b60877a78a17a90dd4287dc46\": container with ID starting with 978334a91a15bd1fc890686aa3afb0eac368a82b60877a78a17a90dd4287dc46 not found: ID does not exist" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.559746 4825 scope.go:117] "RemoveContainer" containerID="32014d6ef2306c1dbc5b119d3d4a34b690ccb37c24c9d27cee69d8f1f3be2ff5" Jan 22 15:48:27 crc kubenswrapper[4825]: E0122 15:48:27.560012 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32014d6ef2306c1dbc5b119d3d4a34b690ccb37c24c9d27cee69d8f1f3be2ff5\": container with ID starting with 32014d6ef2306c1dbc5b119d3d4a34b690ccb37c24c9d27cee69d8f1f3be2ff5 not found: ID does not exist" containerID="32014d6ef2306c1dbc5b119d3d4a34b690ccb37c24c9d27cee69d8f1f3be2ff5" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.560036 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32014d6ef2306c1dbc5b119d3d4a34b690ccb37c24c9d27cee69d8f1f3be2ff5"} err="failed to get container status \"32014d6ef2306c1dbc5b119d3d4a34b690ccb37c24c9d27cee69d8f1f3be2ff5\": rpc error: code = NotFound desc = could not find container \"32014d6ef2306c1dbc5b119d3d4a34b690ccb37c24c9d27cee69d8f1f3be2ff5\": container with ID starting with 32014d6ef2306c1dbc5b119d3d4a34b690ccb37c24c9d27cee69d8f1f3be2ff5 not found: ID does not exist" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.560053 4825 scope.go:117] "RemoveContainer" containerID="d6ea2d3450ac0ca2917c21fd9b22937d7cd4ec843a9f4fbaea6d835004c358a3" Jan 22 15:48:27 crc kubenswrapper[4825]: E0122 15:48:27.560300 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d6ea2d3450ac0ca2917c21fd9b22937d7cd4ec843a9f4fbaea6d835004c358a3\": container with ID starting with d6ea2d3450ac0ca2917c21fd9b22937d7cd4ec843a9f4fbaea6d835004c358a3 not found: ID does not exist" containerID="d6ea2d3450ac0ca2917c21fd9b22937d7cd4ec843a9f4fbaea6d835004c358a3" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.560322 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6ea2d3450ac0ca2917c21fd9b22937d7cd4ec843a9f4fbaea6d835004c358a3"} err="failed to get container status \"d6ea2d3450ac0ca2917c21fd9b22937d7cd4ec843a9f4fbaea6d835004c358a3\": rpc error: code = NotFound desc = could not find container \"d6ea2d3450ac0ca2917c21fd9b22937d7cd4ec843a9f4fbaea6d835004c358a3\": container with ID starting with d6ea2d3450ac0ca2917c21fd9b22937d7cd4ec843a9f4fbaea6d835004c358a3 not found: ID does not exist" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.625205 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.634790 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.634889 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.635027 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b4abeda5-c4cb-4684-8829-7dbc545f31bb-log-httpd\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.635064 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmgz4\" (UniqueName: \"kubernetes.io/projected/b4abeda5-c4cb-4684-8829-7dbc545f31bb-kube-api-access-rmgz4\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.635110 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.635206 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-scripts\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.635244 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-config-data\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.635273 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4abeda5-c4cb-4684-8829-7dbc545f31bb-run-httpd\") pod \"ceilometer-0\" (UID: 
\"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.737068 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-config-data\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.737427 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4abeda5-c4cb-4684-8829-7dbc545f31bb-run-httpd\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.737504 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.737568 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.737652 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4abeda5-c4cb-4684-8829-7dbc545f31bb-log-httpd\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.737686 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rmgz4\" (UniqueName: \"kubernetes.io/projected/b4abeda5-c4cb-4684-8829-7dbc545f31bb-kube-api-access-rmgz4\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.737732 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.737811 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-scripts\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.738213 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4abeda5-c4cb-4684-8829-7dbc545f31bb-log-httpd\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.738403 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4abeda5-c4cb-4684-8829-7dbc545f31bb-run-httpd\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.742614 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: 
I0122 15:48:27.745091 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.746899 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-scripts\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.747865 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-config-data\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.748033 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.758260 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmgz4\" (UniqueName: \"kubernetes.io/projected/b4abeda5-c4cb-4684-8829-7dbc545f31bb-kube-api-access-rmgz4\") pod \"ceilometer-0\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " pod="openstack/ceilometer-0" Jan 22 15:48:27 crc kubenswrapper[4825]: I0122 15:48:27.857030 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.421896 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.424390 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac2aec93-3b95-48dd-8799-49e89387ab25","Type":"ContainerStarted","Data":"e133015b53b04a01fcce8d1405d04bb98b11b2a147b6e86678430dc5a6a12d67"} Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.424448 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac2aec93-3b95-48dd-8799-49e89387ab25","Type":"ContainerStarted","Data":"78415d50f989a95bec40049dd178325eafd43047459f35e3b19ec0c2ee2ee67b"} Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.429500 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"151dca33-da19-4a32-948e-ec8bc6d14829","Type":"ContainerStarted","Data":"2b341fd24fd5b93eec9317a41b3ff1f42399c4661ad4696cfe47fb9d9d1ed858"} Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.429545 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"151dca33-da19-4a32-948e-ec8bc6d14829","Type":"ContainerStarted","Data":"1faa55bec10c51da5c75204840b4689111532368e498b2810fb00a38c5418fc8"} Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.432487 4825 generic.go:334] "Generic (PLEG): container finished" podID="952055c6-1b43-4621-9fd9-4078d8539301" containerID="dc6dccd58d8e3770162d09072d8fe87c08bc3db2385c359ca2f2efa2b955cc92" exitCode=0 Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.432545 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"952055c6-1b43-4621-9fd9-4078d8539301","Type":"ContainerDied","Data":"dc6dccd58d8e3770162d09072d8fe87c08bc3db2385c359ca2f2efa2b955cc92"} Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.432587 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"952055c6-1b43-4621-9fd9-4078d8539301","Type":"ContainerDied","Data":"1f1eb43d2ab4ca02222e10ba5277307f6ca6eeb75b8e423b5efcb8ed7a86aae8"} Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.432583 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.432609 4825 scope.go:117] "RemoveContainer" containerID="dc6dccd58d8e3770162d09072d8fe87c08bc3db2385c359ca2f2efa2b955cc92" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.458274 4825 scope.go:117] "RemoveContainer" containerID="db580a11954774ed92bac9da9778a2ed53d03c5579f431a24bbcae0b6e3a033f" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.503006 4825 scope.go:117] "RemoveContainer" containerID="dc6dccd58d8e3770162d09072d8fe87c08bc3db2385c359ca2f2efa2b955cc92" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.503091 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:48:28 crc kubenswrapper[4825]: E0122 15:48:28.503655 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc6dccd58d8e3770162d09072d8fe87c08bc3db2385c359ca2f2efa2b955cc92\": container with ID starting with dc6dccd58d8e3770162d09072d8fe87c08bc3db2385c359ca2f2efa2b955cc92 not found: ID does not exist" containerID="dc6dccd58d8e3770162d09072d8fe87c08bc3db2385c359ca2f2efa2b955cc92" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.503713 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc6dccd58d8e3770162d09072d8fe87c08bc3db2385c359ca2f2efa2b955cc92"} err="failed to get container status \"dc6dccd58d8e3770162d09072d8fe87c08bc3db2385c359ca2f2efa2b955cc92\": rpc error: code = NotFound desc = could not find container \"dc6dccd58d8e3770162d09072d8fe87c08bc3db2385c359ca2f2efa2b955cc92\": container with ID starting with 
dc6dccd58d8e3770162d09072d8fe87c08bc3db2385c359ca2f2efa2b955cc92 not found: ID does not exist" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.503752 4825 scope.go:117] "RemoveContainer" containerID="db580a11954774ed92bac9da9778a2ed53d03c5579f431a24bbcae0b6e3a033f" Jan 22 15:48:28 crc kubenswrapper[4825]: E0122 15:48:28.505267 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db580a11954774ed92bac9da9778a2ed53d03c5579f431a24bbcae0b6e3a033f\": container with ID starting with db580a11954774ed92bac9da9778a2ed53d03c5579f431a24bbcae0b6e3a033f not found: ID does not exist" containerID="db580a11954774ed92bac9da9778a2ed53d03c5579f431a24bbcae0b6e3a033f" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.505319 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db580a11954774ed92bac9da9778a2ed53d03c5579f431a24bbcae0b6e3a033f"} err="failed to get container status \"db580a11954774ed92bac9da9778a2ed53d03c5579f431a24bbcae0b6e3a033f\": rpc error: code = NotFound desc = could not find container \"db580a11954774ed92bac9da9778a2ed53d03c5579f431a24bbcae0b6e3a033f\": container with ID starting with db580a11954774ed92bac9da9778a2ed53d03c5579f431a24bbcae0b6e3a033f not found: ID does not exist" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.513434 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.513411081 podStartE2EDuration="2.513411081s" podCreationTimestamp="2026-01-22 15:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:48:28.471708857 +0000 UTC m=+1455.233235777" watchObservedRunningTime="2026-01-22 15:48:28.513411081 +0000 UTC m=+1455.274937991" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.533292 4825 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5332631 podStartE2EDuration="2.5332631s" podCreationTimestamp="2026-01-22 15:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:48:28.506461445 +0000 UTC m=+1455.267988355" watchObservedRunningTime="2026-01-22 15:48:28.5332631 +0000 UTC m=+1455.294790010" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.557635 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/952055c6-1b43-4621-9fd9-4078d8539301-combined-ca-bundle\") pod \"952055c6-1b43-4621-9fd9-4078d8539301\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.557733 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qqbv\" (UniqueName: \"kubernetes.io/projected/952055c6-1b43-4621-9fd9-4078d8539301-kube-api-access-7qqbv\") pod \"952055c6-1b43-4621-9fd9-4078d8539301\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.557813 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/952055c6-1b43-4621-9fd9-4078d8539301-logs\") pod \"952055c6-1b43-4621-9fd9-4078d8539301\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.557925 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/952055c6-1b43-4621-9fd9-4078d8539301-nova-metadata-tls-certs\") pod \"952055c6-1b43-4621-9fd9-4078d8539301\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.558017 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/952055c6-1b43-4621-9fd9-4078d8539301-config-data\") pod \"952055c6-1b43-4621-9fd9-4078d8539301\" (UID: \"952055c6-1b43-4621-9fd9-4078d8539301\") " Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.559152 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/952055c6-1b43-4621-9fd9-4078d8539301-logs" (OuterVolumeSpecName: "logs") pod "952055c6-1b43-4621-9fd9-4078d8539301" (UID: "952055c6-1b43-4621-9fd9-4078d8539301"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.576248 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/952055c6-1b43-4621-9fd9-4078d8539301-kube-api-access-7qqbv" (OuterVolumeSpecName: "kube-api-access-7qqbv") pod "952055c6-1b43-4621-9fd9-4078d8539301" (UID: "952055c6-1b43-4621-9fd9-4078d8539301"). InnerVolumeSpecName "kube-api-access-7qqbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.602820 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/952055c6-1b43-4621-9fd9-4078d8539301-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "952055c6-1b43-4621-9fd9-4078d8539301" (UID: "952055c6-1b43-4621-9fd9-4078d8539301"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.618329 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/952055c6-1b43-4621-9fd9-4078d8539301-config-data" (OuterVolumeSpecName: "config-data") pod "952055c6-1b43-4621-9fd9-4078d8539301" (UID: "952055c6-1b43-4621-9fd9-4078d8539301"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.662236 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/952055c6-1b43-4621-9fd9-4078d8539301-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.662273 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qqbv\" (UniqueName: \"kubernetes.io/projected/952055c6-1b43-4621-9fd9-4078d8539301-kube-api-access-7qqbv\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.662288 4825 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/952055c6-1b43-4621-9fd9-4078d8539301-logs\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.662303 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/952055c6-1b43-4621-9fd9-4078d8539301-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.666098 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/952055c6-1b43-4621-9fd9-4078d8539301-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "952055c6-1b43-4621-9fd9-4078d8539301" (UID: "952055c6-1b43-4621-9fd9-4078d8539301"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.764912 4825 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/952055c6-1b43-4621-9fd9-4078d8539301-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.772641 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.788259 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.801991 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 15:48:28 crc kubenswrapper[4825]: E0122 15:48:28.802573 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952055c6-1b43-4621-9fd9-4078d8539301" containerName="nova-metadata-metadata" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.802595 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="952055c6-1b43-4621-9fd9-4078d8539301" containerName="nova-metadata-metadata" Jan 22 15:48:28 crc kubenswrapper[4825]: E0122 15:48:28.802623 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952055c6-1b43-4621-9fd9-4078d8539301" containerName="nova-metadata-log" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.802630 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="952055c6-1b43-4621-9fd9-4078d8539301" containerName="nova-metadata-log" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.802881 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="952055c6-1b43-4621-9fd9-4078d8539301" containerName="nova-metadata-log" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.802919 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="952055c6-1b43-4621-9fd9-4078d8539301" 
containerName="nova-metadata-metadata" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.804433 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.810768 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.811011 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.817600 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.894505 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e63e3f9f-d983-4643-b3be-804cb489ac96-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e63e3f9f-d983-4643-b3be-804cb489ac96\") " pod="openstack/nova-metadata-0" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.894571 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e63e3f9f-d983-4643-b3be-804cb489ac96-logs\") pod \"nova-metadata-0\" (UID: \"e63e3f9f-d983-4643-b3be-804cb489ac96\") " pod="openstack/nova-metadata-0" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.894696 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e63e3f9f-d983-4643-b3be-804cb489ac96-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e63e3f9f-d983-4643-b3be-804cb489ac96\") " pod="openstack/nova-metadata-0" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.894754 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-dw492\" (UniqueName: \"kubernetes.io/projected/e63e3f9f-d983-4643-b3be-804cb489ac96-kube-api-access-dw492\") pod \"nova-metadata-0\" (UID: \"e63e3f9f-d983-4643-b3be-804cb489ac96\") " pod="openstack/nova-metadata-0" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.894963 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e63e3f9f-d983-4643-b3be-804cb489ac96-config-data\") pod \"nova-metadata-0\" (UID: \"e63e3f9f-d983-4643-b3be-804cb489ac96\") " pod="openstack/nova-metadata-0" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.997265 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e63e3f9f-d983-4643-b3be-804cb489ac96-config-data\") pod \"nova-metadata-0\" (UID: \"e63e3f9f-d983-4643-b3be-804cb489ac96\") " pod="openstack/nova-metadata-0" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.997360 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e63e3f9f-d983-4643-b3be-804cb489ac96-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e63e3f9f-d983-4643-b3be-804cb489ac96\") " pod="openstack/nova-metadata-0" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.997386 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e63e3f9f-d983-4643-b3be-804cb489ac96-logs\") pod \"nova-metadata-0\" (UID: \"e63e3f9f-d983-4643-b3be-804cb489ac96\") " pod="openstack/nova-metadata-0" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.997443 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e63e3f9f-d983-4643-b3be-804cb489ac96-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"e63e3f9f-d983-4643-b3be-804cb489ac96\") " pod="openstack/nova-metadata-0" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.997472 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw492\" (UniqueName: \"kubernetes.io/projected/e63e3f9f-d983-4643-b3be-804cb489ac96-kube-api-access-dw492\") pod \"nova-metadata-0\" (UID: \"e63e3f9f-d983-4643-b3be-804cb489ac96\") " pod="openstack/nova-metadata-0" Jan 22 15:48:28 crc kubenswrapper[4825]: I0122 15:48:28.998652 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e63e3f9f-d983-4643-b3be-804cb489ac96-logs\") pod \"nova-metadata-0\" (UID: \"e63e3f9f-d983-4643-b3be-804cb489ac96\") " pod="openstack/nova-metadata-0" Jan 22 15:48:29 crc kubenswrapper[4825]: I0122 15:48:29.001643 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e63e3f9f-d983-4643-b3be-804cb489ac96-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e63e3f9f-d983-4643-b3be-804cb489ac96\") " pod="openstack/nova-metadata-0" Jan 22 15:48:29 crc kubenswrapper[4825]: I0122 15:48:29.002424 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e63e3f9f-d983-4643-b3be-804cb489ac96-config-data\") pod \"nova-metadata-0\" (UID: \"e63e3f9f-d983-4643-b3be-804cb489ac96\") " pod="openstack/nova-metadata-0" Jan 22 15:48:29 crc kubenswrapper[4825]: I0122 15:48:29.002750 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e63e3f9f-d983-4643-b3be-804cb489ac96-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e63e3f9f-d983-4643-b3be-804cb489ac96\") " pod="openstack/nova-metadata-0" Jan 22 15:48:29 crc kubenswrapper[4825]: I0122 15:48:29.015284 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dw492\" (UniqueName: \"kubernetes.io/projected/e63e3f9f-d983-4643-b3be-804cb489ac96-kube-api-access-dw492\") pod \"nova-metadata-0\" (UID: \"e63e3f9f-d983-4643-b3be-804cb489ac96\") " pod="openstack/nova-metadata-0" Jan 22 15:48:29 crc kubenswrapper[4825]: I0122 15:48:29.175064 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 15:48:29 crc kubenswrapper[4825]: I0122 15:48:29.449650 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4abeda5-c4cb-4684-8829-7dbc545f31bb","Type":"ContainerStarted","Data":"d64e739171727e84f312fd8495132a2dace9f7a63cfca7b5a458c8795b23921d"} Jan 22 15:48:29 crc kubenswrapper[4825]: I0122 15:48:29.450057 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4abeda5-c4cb-4684-8829-7dbc545f31bb","Type":"ContainerStarted","Data":"d1edbb4c60453e86eb3d5fddc6145178f9caba108a69cc03d8e624ffc73f3357"} Jan 22 15:48:29 crc kubenswrapper[4825]: I0122 15:48:29.529541 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="952055c6-1b43-4621-9fd9-4078d8539301" path="/var/lib/kubelet/pods/952055c6-1b43-4621-9fd9-4078d8539301/volumes" Jan 22 15:48:29 crc kubenswrapper[4825]: I0122 15:48:29.644513 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 15:48:30 crc kubenswrapper[4825]: I0122 15:48:30.461847 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e63e3f9f-d983-4643-b3be-804cb489ac96","Type":"ContainerStarted","Data":"fd6200197d6bf2e045ae62d1847f851f0679c09392c387b031e98bf6d055bd51"} Jan 22 15:48:30 crc kubenswrapper[4825]: I0122 15:48:30.462217 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"e63e3f9f-d983-4643-b3be-804cb489ac96","Type":"ContainerStarted","Data":"277ea8619794a39d93dd0e310f55c3761a5026d878d34af35c8fb3aefee1b634"} Jan 22 15:48:30 crc kubenswrapper[4825]: I0122 15:48:30.462233 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e63e3f9f-d983-4643-b3be-804cb489ac96","Type":"ContainerStarted","Data":"27b919663bbeb6c7c4c9fc21576a2bc4d0f7e9f215aae94d8ce8a15cd5037841"} Jan 22 15:48:30 crc kubenswrapper[4825]: I0122 15:48:30.464301 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4abeda5-c4cb-4684-8829-7dbc545f31bb","Type":"ContainerStarted","Data":"e6e3a4fe3652b91aad4cc6c9650108513d82cbbfb4e3f66a8da96b202795b018"} Jan 22 15:48:30 crc kubenswrapper[4825]: I0122 15:48:30.491921 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.491903714 podStartE2EDuration="2.491903714s" podCreationTimestamp="2026-01-22 15:48:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:48:30.486997536 +0000 UTC m=+1457.248524446" watchObservedRunningTime="2026-01-22 15:48:30.491903714 +0000 UTC m=+1457.253430614" Jan 22 15:48:31 crc kubenswrapper[4825]: I0122 15:48:31.475966 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4abeda5-c4cb-4684-8829-7dbc545f31bb","Type":"ContainerStarted","Data":"c4595b4cecc31e567aff17d2f3061b3ae53c05627898e4aed99f9cd0393a137f"} Jan 22 15:48:32 crc kubenswrapper[4825]: I0122 15:48:32.167148 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 15:48:33 crc kubenswrapper[4825]: I0122 15:48:33.502784 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b4abeda5-c4cb-4684-8829-7dbc545f31bb","Type":"ContainerStarted","Data":"30e6483a79918cbecd6a64be6b20c3f270798dfdc492d642d8da176a77ee5459"} Jan 22 15:48:33 crc kubenswrapper[4825]: I0122 15:48:33.503525 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 15:48:33 crc kubenswrapper[4825]: I0122 15:48:33.550765 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.188576873 podStartE2EDuration="6.550748697s" podCreationTimestamp="2026-01-22 15:48:27 +0000 UTC" firstStartedPulling="2026-01-22 15:48:28.502901995 +0000 UTC m=+1455.264428905" lastFinishedPulling="2026-01-22 15:48:32.865073819 +0000 UTC m=+1459.626600729" observedRunningTime="2026-01-22 15:48:33.530159108 +0000 UTC m=+1460.291686018" watchObservedRunningTime="2026-01-22 15:48:33.550748697 +0000 UTC m=+1460.312275607" Jan 22 15:48:34 crc kubenswrapper[4825]: I0122 15:48:34.176510 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 15:48:34 crc kubenswrapper[4825]: I0122 15:48:34.176582 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 15:48:36 crc kubenswrapper[4825]: I0122 15:48:36.807514 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 15:48:36 crc kubenswrapper[4825]: I0122 15:48:36.808152 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 15:48:37 crc kubenswrapper[4825]: I0122 15:48:37.167515 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 22 15:48:37 crc kubenswrapper[4825]: I0122 15:48:37.205245 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 22 15:48:37 crc kubenswrapper[4825]: I0122 15:48:37.581338 4825 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 22 15:48:37 crc kubenswrapper[4825]: I0122 15:48:37.831309 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="151dca33-da19-4a32-948e-ec8bc6d14829" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.240:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 15:48:37 crc kubenswrapper[4825]: I0122 15:48:37.831309 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="151dca33-da19-4a32-948e-ec8bc6d14829" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.240:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 15:48:39 crc kubenswrapper[4825]: I0122 15:48:39.177113 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 15:48:39 crc kubenswrapper[4825]: I0122 15:48:39.177419 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 15:48:40 crc kubenswrapper[4825]: I0122 15:48:40.192155 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e63e3f9f-d983-4643-b3be-804cb489ac96" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.243:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 15:48:40 crc kubenswrapper[4825]: I0122 15:48:40.192173 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e63e3f9f-d983-4643-b3be-804cb489ac96" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.243:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 15:48:46 crc kubenswrapper[4825]: I0122 15:48:46.814439 4825 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 15:48:46 crc kubenswrapper[4825]: I0122 15:48:46.816441 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 15:48:46 crc kubenswrapper[4825]: I0122 15:48:46.821422 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 15:48:46 crc kubenswrapper[4825]: I0122 15:48:46.822886 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 15:48:47 crc kubenswrapper[4825]: I0122 15:48:47.707204 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 15:48:47 crc kubenswrapper[4825]: I0122 15:48:47.729787 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 15:48:49 crc kubenswrapper[4825]: I0122 15:48:49.184908 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 15:48:49 crc kubenswrapper[4825]: I0122 15:48:49.189063 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 15:48:49 crc kubenswrapper[4825]: I0122 15:48:49.192166 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 15:48:49 crc kubenswrapper[4825]: I0122 15:48:49.821467 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 15:48:57 crc kubenswrapper[4825]: I0122 15:48:57.865140 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 15:49:09 crc kubenswrapper[4825]: I0122 15:49:09.877258 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-cshtw"] Jan 22 15:49:09 crc kubenswrapper[4825]: I0122 15:49:09.983326 4825 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-cshtw"] Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.073721 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-f48qr"] Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.075553 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-f48qr" Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.077405 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.081492 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8906ca3b-553a-4473-8b78-ab2de85f25a8-config-data\") pod \"cloudkitty-db-sync-f48qr\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " pod="openstack/cloudkitty-db-sync-f48qr" Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.081568 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8906ca3b-553a-4473-8b78-ab2de85f25a8-scripts\") pod \"cloudkitty-db-sync-f48qr\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " pod="openstack/cloudkitty-db-sync-f48qr" Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.081683 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r2k4\" (UniqueName: \"kubernetes.io/projected/8906ca3b-553a-4473-8b78-ab2de85f25a8-kube-api-access-5r2k4\") pod \"cloudkitty-db-sync-f48qr\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " pod="openstack/cloudkitty-db-sync-f48qr" Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.081730 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8906ca3b-553a-4473-8b78-ab2de85f25a8-combined-ca-bundle\") pod \"cloudkitty-db-sync-f48qr\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " pod="openstack/cloudkitty-db-sync-f48qr" Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.081769 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8906ca3b-553a-4473-8b78-ab2de85f25a8-certs\") pod \"cloudkitty-db-sync-f48qr\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " pod="openstack/cloudkitty-db-sync-f48qr" Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.084871 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-f48qr"] Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.183410 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8906ca3b-553a-4473-8b78-ab2de85f25a8-config-data\") pod \"cloudkitty-db-sync-f48qr\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " pod="openstack/cloudkitty-db-sync-f48qr" Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.183468 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8906ca3b-553a-4473-8b78-ab2de85f25a8-scripts\") pod \"cloudkitty-db-sync-f48qr\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " pod="openstack/cloudkitty-db-sync-f48qr" Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.183548 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r2k4\" (UniqueName: \"kubernetes.io/projected/8906ca3b-553a-4473-8b78-ab2de85f25a8-kube-api-access-5r2k4\") pod \"cloudkitty-db-sync-f48qr\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " pod="openstack/cloudkitty-db-sync-f48qr" Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.183573 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8906ca3b-553a-4473-8b78-ab2de85f25a8-combined-ca-bundle\") pod \"cloudkitty-db-sync-f48qr\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " pod="openstack/cloudkitty-db-sync-f48qr" Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.183596 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8906ca3b-553a-4473-8b78-ab2de85f25a8-certs\") pod \"cloudkitty-db-sync-f48qr\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " pod="openstack/cloudkitty-db-sync-f48qr" Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.191960 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8906ca3b-553a-4473-8b78-ab2de85f25a8-scripts\") pod \"cloudkitty-db-sync-f48qr\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " pod="openstack/cloudkitty-db-sync-f48qr" Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.194134 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8906ca3b-553a-4473-8b78-ab2de85f25a8-config-data\") pod \"cloudkitty-db-sync-f48qr\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " pod="openstack/cloudkitty-db-sync-f48qr" Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.195596 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8906ca3b-553a-4473-8b78-ab2de85f25a8-combined-ca-bundle\") pod \"cloudkitty-db-sync-f48qr\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " pod="openstack/cloudkitty-db-sync-f48qr" Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.197596 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8906ca3b-553a-4473-8b78-ab2de85f25a8-certs\") pod 
\"cloudkitty-db-sync-f48qr\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " pod="openstack/cloudkitty-db-sync-f48qr" Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.201950 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r2k4\" (UniqueName: \"kubernetes.io/projected/8906ca3b-553a-4473-8b78-ab2de85f25a8-kube-api-access-5r2k4\") pod \"cloudkitty-db-sync-f48qr\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " pod="openstack/cloudkitty-db-sync-f48qr" Jan 22 15:49:10 crc kubenswrapper[4825]: I0122 15:49:10.403056 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-f48qr" Jan 22 15:49:11 crc kubenswrapper[4825]: I0122 15:49:11.049703 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-f48qr"] Jan 22 15:49:11 crc kubenswrapper[4825]: W0122 15:49:11.077374 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8906ca3b_553a_4473_8b78_ab2de85f25a8.slice/crio-40caea94a7c3571fa524dbaddb84354a0fb51fd5492d19ea29aaba4d848516d0 WatchSource:0}: Error finding container 40caea94a7c3571fa524dbaddb84354a0fb51fd5492d19ea29aaba4d848516d0: Status 404 returned error can't find the container with id 40caea94a7c3571fa524dbaddb84354a0fb51fd5492d19ea29aaba4d848516d0 Jan 22 15:49:11 crc kubenswrapper[4825]: I0122 15:49:11.530943 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e" path="/var/lib/kubelet/pods/52bc05ea-d642-4fca-b8ce-fa0d9a2bf38e/volumes" Jan 22 15:49:11 crc kubenswrapper[4825]: I0122 15:49:11.908588 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-f48qr" event={"ID":"8906ca3b-553a-4473-8b78-ab2de85f25a8","Type":"ContainerStarted","Data":"8e5bddf822a2395fb61690169e8fc9ed8285244f723264be49fed4d886e79c50"} Jan 22 15:49:11 crc kubenswrapper[4825]: I0122 
15:49:11.908956 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-f48qr" event={"ID":"8906ca3b-553a-4473-8b78-ab2de85f25a8","Type":"ContainerStarted","Data":"40caea94a7c3571fa524dbaddb84354a0fb51fd5492d19ea29aaba4d848516d0"} Jan 22 15:49:11 crc kubenswrapper[4825]: I0122 15:49:11.946839 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-f48qr" podStartSLOduration=1.6622558870000002 podStartE2EDuration="1.946728828s" podCreationTimestamp="2026-01-22 15:49:10 +0000 UTC" firstStartedPulling="2026-01-22 15:49:11.079579279 +0000 UTC m=+1497.841106189" lastFinishedPulling="2026-01-22 15:49:11.3640522 +0000 UTC m=+1498.125579130" observedRunningTime="2026-01-22 15:49:11.933170846 +0000 UTC m=+1498.694697756" watchObservedRunningTime="2026-01-22 15:49:11.946728828 +0000 UTC m=+1498.708255738" Jan 22 15:49:12 crc kubenswrapper[4825]: I0122 15:49:12.013927 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:49:12 crc kubenswrapper[4825]: I0122 15:49:12.014252 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerName="ceilometer-central-agent" containerID="cri-o://d64e739171727e84f312fd8495132a2dace9f7a63cfca7b5a458c8795b23921d" gracePeriod=30 Jan 22 15:49:12 crc kubenswrapper[4825]: I0122 15:49:12.014415 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerName="proxy-httpd" containerID="cri-o://30e6483a79918cbecd6a64be6b20c3f270798dfdc492d642d8da176a77ee5459" gracePeriod=30 Jan 22 15:49:12 crc kubenswrapper[4825]: I0122 15:49:12.014483 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerName="ceilometer-notification-agent" 
containerID="cri-o://e6e3a4fe3652b91aad4cc6c9650108513d82cbbfb4e3f66a8da96b202795b018" gracePeriod=30 Jan 22 15:49:12 crc kubenswrapper[4825]: I0122 15:49:12.014620 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerName="sg-core" containerID="cri-o://c4595b4cecc31e567aff17d2f3061b3ae53c05627898e4aed99f9cd0393a137f" gracePeriod=30 Jan 22 15:49:12 crc kubenswrapper[4825]: I0122 15:49:12.732010 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 15:49:12 crc kubenswrapper[4825]: I0122 15:49:12.963519 4825 generic.go:334] "Generic (PLEG): container finished" podID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerID="30e6483a79918cbecd6a64be6b20c3f270798dfdc492d642d8da176a77ee5459" exitCode=0 Jan 22 15:49:12 crc kubenswrapper[4825]: I0122 15:49:12.963550 4825 generic.go:334] "Generic (PLEG): container finished" podID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerID="c4595b4cecc31e567aff17d2f3061b3ae53c05627898e4aed99f9cd0393a137f" exitCode=2 Jan 22 15:49:12 crc kubenswrapper[4825]: I0122 15:49:12.963559 4825 generic.go:334] "Generic (PLEG): container finished" podID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerID="d64e739171727e84f312fd8495132a2dace9f7a63cfca7b5a458c8795b23921d" exitCode=0 Jan 22 15:49:12 crc kubenswrapper[4825]: I0122 15:49:12.964373 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4abeda5-c4cb-4684-8829-7dbc545f31bb","Type":"ContainerDied","Data":"30e6483a79918cbecd6a64be6b20c3f270798dfdc492d642d8da176a77ee5459"} Jan 22 15:49:12 crc kubenswrapper[4825]: I0122 15:49:12.964411 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4abeda5-c4cb-4684-8829-7dbc545f31bb","Type":"ContainerDied","Data":"c4595b4cecc31e567aff17d2f3061b3ae53c05627898e4aed99f9cd0393a137f"} Jan 22 15:49:12 crc kubenswrapper[4825]: 
I0122 15:49:12.964423 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4abeda5-c4cb-4684-8829-7dbc545f31bb","Type":"ContainerDied","Data":"d64e739171727e84f312fd8495132a2dace9f7a63cfca7b5a458c8795b23921d"} Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.518460 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.675609 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-sg-core-conf-yaml\") pod \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.675723 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmgz4\" (UniqueName: \"kubernetes.io/projected/b4abeda5-c4cb-4684-8829-7dbc545f31bb-kube-api-access-rmgz4\") pod \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.675855 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-scripts\") pod \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.675914 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-ceilometer-tls-certs\") pod \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.675931 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-combined-ca-bundle\") pod \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.676111 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-config-data\") pod \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.676174 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4abeda5-c4cb-4684-8829-7dbc545f31bb-log-httpd\") pod \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.676248 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4abeda5-c4cb-4684-8829-7dbc545f31bb-run-httpd\") pod \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\" (UID: \"b4abeda5-c4cb-4684-8829-7dbc545f31bb\") " Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.676918 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4abeda5-c4cb-4684-8829-7dbc545f31bb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b4abeda5-c4cb-4684-8829-7dbc545f31bb" (UID: "b4abeda5-c4cb-4684-8829-7dbc545f31bb"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.677379 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4abeda5-c4cb-4684-8829-7dbc545f31bb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b4abeda5-c4cb-4684-8829-7dbc545f31bb" (UID: "b4abeda5-c4cb-4684-8829-7dbc545f31bb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.678056 4825 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4abeda5-c4cb-4684-8829-7dbc545f31bb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.678073 4825 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4abeda5-c4cb-4684-8829-7dbc545f31bb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.702304 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4abeda5-c4cb-4684-8829-7dbc545f31bb-kube-api-access-rmgz4" (OuterVolumeSpecName: "kube-api-access-rmgz4") pod "b4abeda5-c4cb-4684-8829-7dbc545f31bb" (UID: "b4abeda5-c4cb-4684-8829-7dbc545f31bb"). InnerVolumeSpecName "kube-api-access-rmgz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.713815 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-scripts" (OuterVolumeSpecName: "scripts") pod "b4abeda5-c4cb-4684-8829-7dbc545f31bb" (UID: "b4abeda5-c4cb-4684-8829-7dbc545f31bb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.741883 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b4abeda5-c4cb-4684-8829-7dbc545f31bb" (UID: "b4abeda5-c4cb-4684-8829-7dbc545f31bb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.763014 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b4abeda5-c4cb-4684-8829-7dbc545f31bb" (UID: "b4abeda5-c4cb-4684-8829-7dbc545f31bb"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.780902 4825 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.780952 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmgz4\" (UniqueName: \"kubernetes.io/projected/b4abeda5-c4cb-4684-8829-7dbc545f31bb-kube-api-access-rmgz4\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.780969 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.781009 4825 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-ceilometer-tls-certs\") 
on node \"crc\" DevicePath \"\"" Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.982237 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4abeda5-c4cb-4684-8829-7dbc545f31bb" (UID: "b4abeda5-c4cb-4684-8829-7dbc545f31bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.989240 4825 generic.go:334] "Generic (PLEG): container finished" podID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerID="e6e3a4fe3652b91aad4cc6c9650108513d82cbbfb4e3f66a8da96b202795b018" exitCode=0 Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.989293 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4abeda5-c4cb-4684-8829-7dbc545f31bb","Type":"ContainerDied","Data":"e6e3a4fe3652b91aad4cc6c9650108513d82cbbfb4e3f66a8da96b202795b018"} Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.989324 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4abeda5-c4cb-4684-8829-7dbc545f31bb","Type":"ContainerDied","Data":"d1edbb4c60453e86eb3d5fddc6145178f9caba108a69cc03d8e624ffc73f3357"} Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.989337 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:49:13 crc kubenswrapper[4825]: I0122 15:49:13.989345 4825 scope.go:117] "RemoveContainer" containerID="30e6483a79918cbecd6a64be6b20c3f270798dfdc492d642d8da176a77ee5459" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.022198 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-config-data" (OuterVolumeSpecName: "config-data") pod "b4abeda5-c4cb-4684-8829-7dbc545f31bb" (UID: "b4abeda5-c4cb-4684-8829-7dbc545f31bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.054178 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.059091 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.059119 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4abeda5-c4cb-4684-8829-7dbc545f31bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.102437 4825 scope.go:117] "RemoveContainer" containerID="c4595b4cecc31e567aff17d2f3061b3ae53c05627898e4aed99f9cd0393a137f" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.131621 4825 scope.go:117] "RemoveContainer" containerID="e6e3a4fe3652b91aad4cc6c9650108513d82cbbfb4e3f66a8da96b202795b018" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.155313 4825 scope.go:117] "RemoveContainer" containerID="d64e739171727e84f312fd8495132a2dace9f7a63cfca7b5a458c8795b23921d" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.195798 4825 scope.go:117] "RemoveContainer" 
containerID="30e6483a79918cbecd6a64be6b20c3f270798dfdc492d642d8da176a77ee5459" Jan 22 15:49:14 crc kubenswrapper[4825]: E0122 15:49:14.196364 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30e6483a79918cbecd6a64be6b20c3f270798dfdc492d642d8da176a77ee5459\": container with ID starting with 30e6483a79918cbecd6a64be6b20c3f270798dfdc492d642d8da176a77ee5459 not found: ID does not exist" containerID="30e6483a79918cbecd6a64be6b20c3f270798dfdc492d642d8da176a77ee5459" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.196421 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e6483a79918cbecd6a64be6b20c3f270798dfdc492d642d8da176a77ee5459"} err="failed to get container status \"30e6483a79918cbecd6a64be6b20c3f270798dfdc492d642d8da176a77ee5459\": rpc error: code = NotFound desc = could not find container \"30e6483a79918cbecd6a64be6b20c3f270798dfdc492d642d8da176a77ee5459\": container with ID starting with 30e6483a79918cbecd6a64be6b20c3f270798dfdc492d642d8da176a77ee5459 not found: ID does not exist" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.196451 4825 scope.go:117] "RemoveContainer" containerID="c4595b4cecc31e567aff17d2f3061b3ae53c05627898e4aed99f9cd0393a137f" Jan 22 15:49:14 crc kubenswrapper[4825]: E0122 15:49:14.197097 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4595b4cecc31e567aff17d2f3061b3ae53c05627898e4aed99f9cd0393a137f\": container with ID starting with c4595b4cecc31e567aff17d2f3061b3ae53c05627898e4aed99f9cd0393a137f not found: ID does not exist" containerID="c4595b4cecc31e567aff17d2f3061b3ae53c05627898e4aed99f9cd0393a137f" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.197141 4825 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c4595b4cecc31e567aff17d2f3061b3ae53c05627898e4aed99f9cd0393a137f"} err="failed to get container status \"c4595b4cecc31e567aff17d2f3061b3ae53c05627898e4aed99f9cd0393a137f\": rpc error: code = NotFound desc = could not find container \"c4595b4cecc31e567aff17d2f3061b3ae53c05627898e4aed99f9cd0393a137f\": container with ID starting with c4595b4cecc31e567aff17d2f3061b3ae53c05627898e4aed99f9cd0393a137f not found: ID does not exist" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.197171 4825 scope.go:117] "RemoveContainer" containerID="e6e3a4fe3652b91aad4cc6c9650108513d82cbbfb4e3f66a8da96b202795b018" Jan 22 15:49:14 crc kubenswrapper[4825]: E0122 15:49:14.197511 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6e3a4fe3652b91aad4cc6c9650108513d82cbbfb4e3f66a8da96b202795b018\": container with ID starting with e6e3a4fe3652b91aad4cc6c9650108513d82cbbfb4e3f66a8da96b202795b018 not found: ID does not exist" containerID="e6e3a4fe3652b91aad4cc6c9650108513d82cbbfb4e3f66a8da96b202795b018" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.197540 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6e3a4fe3652b91aad4cc6c9650108513d82cbbfb4e3f66a8da96b202795b018"} err="failed to get container status \"e6e3a4fe3652b91aad4cc6c9650108513d82cbbfb4e3f66a8da96b202795b018\": rpc error: code = NotFound desc = could not find container \"e6e3a4fe3652b91aad4cc6c9650108513d82cbbfb4e3f66a8da96b202795b018\": container with ID starting with e6e3a4fe3652b91aad4cc6c9650108513d82cbbfb4e3f66a8da96b202795b018 not found: ID does not exist" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.197556 4825 scope.go:117] "RemoveContainer" containerID="d64e739171727e84f312fd8495132a2dace9f7a63cfca7b5a458c8795b23921d" Jan 22 15:49:14 crc kubenswrapper[4825]: E0122 15:49:14.198082 4825 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d64e739171727e84f312fd8495132a2dace9f7a63cfca7b5a458c8795b23921d\": container with ID starting with d64e739171727e84f312fd8495132a2dace9f7a63cfca7b5a458c8795b23921d not found: ID does not exist" containerID="d64e739171727e84f312fd8495132a2dace9f7a63cfca7b5a458c8795b23921d" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.198106 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d64e739171727e84f312fd8495132a2dace9f7a63cfca7b5a458c8795b23921d"} err="failed to get container status \"d64e739171727e84f312fd8495132a2dace9f7a63cfca7b5a458c8795b23921d\": rpc error: code = NotFound desc = could not find container \"d64e739171727e84f312fd8495132a2dace9f7a63cfca7b5a458c8795b23921d\": container with ID starting with d64e739171727e84f312fd8495132a2dace9f7a63cfca7b5a458c8795b23921d not found: ID does not exist" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.346231 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.392036 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.477261 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:49:14 crc kubenswrapper[4825]: E0122 15:49:14.477897 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerName="sg-core" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.477925 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerName="sg-core" Jan 22 15:49:14 crc kubenswrapper[4825]: E0122 15:49:14.477945 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerName="ceilometer-central-agent" Jan 22 15:49:14 crc 
kubenswrapper[4825]: I0122 15:49:14.477954 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerName="ceilometer-central-agent" Jan 22 15:49:14 crc kubenswrapper[4825]: E0122 15:49:14.478026 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerName="proxy-httpd" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.478035 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerName="proxy-httpd" Jan 22 15:49:14 crc kubenswrapper[4825]: E0122 15:49:14.478060 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerName="ceilometer-notification-agent" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.478068 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerName="ceilometer-notification-agent" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.478358 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerName="ceilometer-central-agent" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.478382 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerName="ceilometer-notification-agent" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.478396 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerName="sg-core" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.478410 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" containerName="proxy-httpd" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.487667 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.491787 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.491925 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.492131 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.492330 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.681953 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mk9w\" (UniqueName: \"kubernetes.io/projected/3a2eadc4-a314-4c54-bdce-455b3697e4ad-kube-api-access-4mk9w\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.682358 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a2eadc4-a314-4c54-bdce-455b3697e4ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.682388 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2eadc4-a314-4c54-bdce-455b3697e4ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.682590 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a2eadc4-a314-4c54-bdce-455b3697e4ad-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.682638 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2eadc4-a314-4c54-bdce-455b3697e4ad-config-data\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.682740 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a2eadc4-a314-4c54-bdce-455b3697e4ad-run-httpd\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.684105 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a2eadc4-a314-4c54-bdce-455b3697e4ad-scripts\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.684181 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a2eadc4-a314-4c54-bdce-455b3697e4ad-log-httpd\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.785650 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a2eadc4-a314-4c54-bdce-455b3697e4ad-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.785711 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2eadc4-a314-4c54-bdce-455b3697e4ad-config-data\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.785750 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a2eadc4-a314-4c54-bdce-455b3697e4ad-run-httpd\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.785783 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a2eadc4-a314-4c54-bdce-455b3697e4ad-scripts\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.785822 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a2eadc4-a314-4c54-bdce-455b3697e4ad-log-httpd\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.785917 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mk9w\" (UniqueName: \"kubernetes.io/projected/3a2eadc4-a314-4c54-bdce-455b3697e4ad-kube-api-access-4mk9w\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.785942 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/3a2eadc4-a314-4c54-bdce-455b3697e4ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.785968 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2eadc4-a314-4c54-bdce-455b3697e4ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.787045 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a2eadc4-a314-4c54-bdce-455b3697e4ad-run-httpd\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.787041 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a2eadc4-a314-4c54-bdce-455b3697e4ad-log-httpd\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.793154 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a2eadc4-a314-4c54-bdce-455b3697e4ad-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.793153 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a2eadc4-a314-4c54-bdce-455b3697e4ad-scripts\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.793282 4825 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2eadc4-a314-4c54-bdce-455b3697e4ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.794852 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a2eadc4-a314-4c54-bdce-455b3697e4ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.798159 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2eadc4-a314-4c54-bdce-455b3697e4ad-config-data\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:14 crc kubenswrapper[4825]: I0122 15:49:14.826490 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mk9w\" (UniqueName: \"kubernetes.io/projected/3a2eadc4-a314-4c54-bdce-455b3697e4ad-kube-api-access-4mk9w\") pod \"ceilometer-0\" (UID: \"3a2eadc4-a314-4c54-bdce-455b3697e4ad\") " pod="openstack/ceilometer-0" Jan 22 15:49:15 crc kubenswrapper[4825]: I0122 15:49:15.106671 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 15:49:15 crc kubenswrapper[4825]: I0122 15:49:15.539574 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4abeda5-c4cb-4684-8829-7dbc545f31bb" path="/var/lib/kubelet/pods/b4abeda5-c4cb-4684-8829-7dbc545f31bb/volumes" Jan 22 15:49:15 crc kubenswrapper[4825]: I0122 15:49:15.732517 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 15:49:16 crc kubenswrapper[4825]: I0122 15:49:16.015973 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a2eadc4-a314-4c54-bdce-455b3697e4ad","Type":"ContainerStarted","Data":"7d9b515d1494087b6bcd6fc04ba098267cf17511d280f89a36bb5d2a91a4caec"} Jan 22 15:49:16 crc kubenswrapper[4825]: I0122 15:49:16.023712 4825 generic.go:334] "Generic (PLEG): container finished" podID="8906ca3b-553a-4473-8b78-ab2de85f25a8" containerID="8e5bddf822a2395fb61690169e8fc9ed8285244f723264be49fed4d886e79c50" exitCode=0 Jan 22 15:49:16 crc kubenswrapper[4825]: I0122 15:49:16.023765 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-f48qr" event={"ID":"8906ca3b-553a-4473-8b78-ab2de85f25a8","Type":"ContainerDied","Data":"8e5bddf822a2395fb61690169e8fc9ed8285244f723264be49fed4d886e79c50"} Jan 22 15:49:17 crc kubenswrapper[4825]: I0122 15:49:17.597495 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-f48qr" Jan 22 15:49:17 crc kubenswrapper[4825]: I0122 15:49:17.697569 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8906ca3b-553a-4473-8b78-ab2de85f25a8-combined-ca-bundle\") pod \"8906ca3b-553a-4473-8b78-ab2de85f25a8\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " Jan 22 15:49:17 crc kubenswrapper[4825]: I0122 15:49:17.698037 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8906ca3b-553a-4473-8b78-ab2de85f25a8-config-data\") pod \"8906ca3b-553a-4473-8b78-ab2de85f25a8\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " Jan 22 15:49:17 crc kubenswrapper[4825]: I0122 15:49:17.698170 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8906ca3b-553a-4473-8b78-ab2de85f25a8-scripts\") pod \"8906ca3b-553a-4473-8b78-ab2de85f25a8\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " Jan 22 15:49:17 crc kubenswrapper[4825]: I0122 15:49:17.698228 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r2k4\" (UniqueName: \"kubernetes.io/projected/8906ca3b-553a-4473-8b78-ab2de85f25a8-kube-api-access-5r2k4\") pod \"8906ca3b-553a-4473-8b78-ab2de85f25a8\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " Jan 22 15:49:17 crc kubenswrapper[4825]: I0122 15:49:17.698392 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8906ca3b-553a-4473-8b78-ab2de85f25a8-certs\") pod \"8906ca3b-553a-4473-8b78-ab2de85f25a8\" (UID: \"8906ca3b-553a-4473-8b78-ab2de85f25a8\") " Jan 22 15:49:17 crc kubenswrapper[4825]: I0122 15:49:17.704658 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/8906ca3b-553a-4473-8b78-ab2de85f25a8-certs" (OuterVolumeSpecName: "certs") pod "8906ca3b-553a-4473-8b78-ab2de85f25a8" (UID: "8906ca3b-553a-4473-8b78-ab2de85f25a8"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:49:17 crc kubenswrapper[4825]: I0122 15:49:17.704765 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8906ca3b-553a-4473-8b78-ab2de85f25a8-kube-api-access-5r2k4" (OuterVolumeSpecName: "kube-api-access-5r2k4") pod "8906ca3b-553a-4473-8b78-ab2de85f25a8" (UID: "8906ca3b-553a-4473-8b78-ab2de85f25a8"). InnerVolumeSpecName "kube-api-access-5r2k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:49:17 crc kubenswrapper[4825]: I0122 15:49:17.710161 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8906ca3b-553a-4473-8b78-ab2de85f25a8-scripts" (OuterVolumeSpecName: "scripts") pod "8906ca3b-553a-4473-8b78-ab2de85f25a8" (UID: "8906ca3b-553a-4473-8b78-ab2de85f25a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:49:17 crc kubenswrapper[4825]: I0122 15:49:17.756307 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8906ca3b-553a-4473-8b78-ab2de85f25a8-config-data" (OuterVolumeSpecName: "config-data") pod "8906ca3b-553a-4473-8b78-ab2de85f25a8" (UID: "8906ca3b-553a-4473-8b78-ab2de85f25a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:49:17 crc kubenswrapper[4825]: I0122 15:49:17.762245 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8906ca3b-553a-4473-8b78-ab2de85f25a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8906ca3b-553a-4473-8b78-ab2de85f25a8" (UID: "8906ca3b-553a-4473-8b78-ab2de85f25a8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:49:17 crc kubenswrapper[4825]: I0122 15:49:17.800286 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8906ca3b-553a-4473-8b78-ab2de85f25a8-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:17 crc kubenswrapper[4825]: I0122 15:49:17.800333 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r2k4\" (UniqueName: \"kubernetes.io/projected/8906ca3b-553a-4473-8b78-ab2de85f25a8-kube-api-access-5r2k4\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:17 crc kubenswrapper[4825]: I0122 15:49:17.800349 4825 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8906ca3b-553a-4473-8b78-ab2de85f25a8-certs\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:17 crc kubenswrapper[4825]: I0122 15:49:17.800363 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8906ca3b-553a-4473-8b78-ab2de85f25a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:17 crc kubenswrapper[4825]: I0122 15:49:17.800375 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8906ca3b-553a-4473-8b78-ab2de85f25a8-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.080060 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-f48qr" event={"ID":"8906ca3b-553a-4473-8b78-ab2de85f25a8","Type":"ContainerDied","Data":"40caea94a7c3571fa524dbaddb84354a0fb51fd5492d19ea29aaba4d848516d0"} Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.080104 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40caea94a7c3571fa524dbaddb84354a0fb51fd5492d19ea29aaba4d848516d0" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.080206 4825 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-f48qr" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.222604 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-98v87"] Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.235114 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-98v87"] Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.311245 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-bsgbr"] Jan 22 15:49:18 crc kubenswrapper[4825]: E0122 15:49:18.311937 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8906ca3b-553a-4473-8b78-ab2de85f25a8" containerName="cloudkitty-db-sync" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.311949 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="8906ca3b-553a-4473-8b78-ab2de85f25a8" containerName="cloudkitty-db-sync" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.312169 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="8906ca3b-553a-4473-8b78-ab2de85f25a8" containerName="cloudkitty-db-sync" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.312934 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-bsgbr" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.315617 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.340375 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-bsgbr"] Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.455451 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d75816f7-ded7-47ef-be7f-f471a696cde4-scripts\") pod \"cloudkitty-storageinit-bsgbr\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " pod="openstack/cloudkitty-storageinit-bsgbr" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.456152 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dkhn\" (UniqueName: \"kubernetes.io/projected/d75816f7-ded7-47ef-be7f-f471a696cde4-kube-api-access-9dkhn\") pod \"cloudkitty-storageinit-bsgbr\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " pod="openstack/cloudkitty-storageinit-bsgbr" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.456233 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d75816f7-ded7-47ef-be7f-f471a696cde4-config-data\") pod \"cloudkitty-storageinit-bsgbr\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " pod="openstack/cloudkitty-storageinit-bsgbr" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.456261 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/d75816f7-ded7-47ef-be7f-f471a696cde4-certs\") pod \"cloudkitty-storageinit-bsgbr\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " pod="openstack/cloudkitty-storageinit-bsgbr" Jan 22 
15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.456360 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d75816f7-ded7-47ef-be7f-f471a696cde4-combined-ca-bundle\") pod \"cloudkitty-storageinit-bsgbr\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " pod="openstack/cloudkitty-storageinit-bsgbr" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.559592 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dkhn\" (UniqueName: \"kubernetes.io/projected/d75816f7-ded7-47ef-be7f-f471a696cde4-kube-api-access-9dkhn\") pod \"cloudkitty-storageinit-bsgbr\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " pod="openstack/cloudkitty-storageinit-bsgbr" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.559685 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d75816f7-ded7-47ef-be7f-f471a696cde4-config-data\") pod \"cloudkitty-storageinit-bsgbr\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " pod="openstack/cloudkitty-storageinit-bsgbr" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.559707 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/d75816f7-ded7-47ef-be7f-f471a696cde4-certs\") pod \"cloudkitty-storageinit-bsgbr\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " pod="openstack/cloudkitty-storageinit-bsgbr" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.559779 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d75816f7-ded7-47ef-be7f-f471a696cde4-combined-ca-bundle\") pod \"cloudkitty-storageinit-bsgbr\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " pod="openstack/cloudkitty-storageinit-bsgbr" Jan 22 15:49:18 crc kubenswrapper[4825]: 
I0122 15:49:18.559883 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d75816f7-ded7-47ef-be7f-f471a696cde4-scripts\") pod \"cloudkitty-storageinit-bsgbr\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " pod="openstack/cloudkitty-storageinit-bsgbr" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.564261 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d75816f7-ded7-47ef-be7f-f471a696cde4-combined-ca-bundle\") pod \"cloudkitty-storageinit-bsgbr\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " pod="openstack/cloudkitty-storageinit-bsgbr" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.564629 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d75816f7-ded7-47ef-be7f-f471a696cde4-config-data\") pod \"cloudkitty-storageinit-bsgbr\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " pod="openstack/cloudkitty-storageinit-bsgbr" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.564964 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d75816f7-ded7-47ef-be7f-f471a696cde4-scripts\") pod \"cloudkitty-storageinit-bsgbr\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " pod="openstack/cloudkitty-storageinit-bsgbr" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.565296 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/d75816f7-ded7-47ef-be7f-f471a696cde4-certs\") pod \"cloudkitty-storageinit-bsgbr\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " pod="openstack/cloudkitty-storageinit-bsgbr" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.582491 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dkhn\" (UniqueName: 
\"kubernetes.io/projected/d75816f7-ded7-47ef-be7f-f471a696cde4-kube-api-access-9dkhn\") pod \"cloudkitty-storageinit-bsgbr\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " pod="openstack/cloudkitty-storageinit-bsgbr" Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.659705 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="45e6f05d-8a80-49ca-add6-e8c41572b664" containerName="rabbitmq" containerID="cri-o://020f01fa01c7531efac312a1a4ee10db30b605df6436c83ee37b61635da0a2e3" gracePeriod=604795 Jan 22 15:49:18 crc kubenswrapper[4825]: I0122 15:49:18.677823 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-bsgbr" Jan 22 15:49:19 crc kubenswrapper[4825]: I0122 15:49:19.508305 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="215992ea-1abc-44d0-925b-799eb87bcc09" containerName="rabbitmq" containerID="cri-o://fe630163da9699c6ae7767c15986a0a522ba8248bdbb3d16653256e23ac471e7" gracePeriod=604795 Jan 22 15:49:19 crc kubenswrapper[4825]: I0122 15:49:19.534568 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="287a38fa-f643-4202-8c80-73080c77388c" path="/var/lib/kubelet/pods/287a38fa-f643-4202-8c80-73080c77388c/volumes" Jan 22 15:49:20 crc kubenswrapper[4825]: I0122 15:49:20.002848 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="45e6f05d-8a80-49ca-add6-e8c41572b664" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.112:5671: connect: connection refused" Jan 22 15:49:20 crc kubenswrapper[4825]: I0122 15:49:20.416613 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="215992ea-1abc-44d0-925b-799eb87bcc09" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.113:5671: connect: connection refused" Jan 22 
15:49:20 crc kubenswrapper[4825]: I0122 15:49:20.967157 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-bsgbr"] Jan 22 15:49:21 crc kubenswrapper[4825]: I0122 15:49:21.123206 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a2eadc4-a314-4c54-bdce-455b3697e4ad","Type":"ContainerStarted","Data":"cbcc9b9e3334216b49b58e60223efd88e622e484afd6ce9b08d57d42d9f4d07b"} Jan 22 15:49:21 crc kubenswrapper[4825]: I0122 15:49:21.124511 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-bsgbr" event={"ID":"d75816f7-ded7-47ef-be7f-f471a696cde4","Type":"ContainerStarted","Data":"17e12d03bffe08c6631e688b98c23ba3c7bc8c3c775cd13fa83b7914b6ac76eb"} Jan 22 15:49:22 crc kubenswrapper[4825]: I0122 15:49:22.137239 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-bsgbr" event={"ID":"d75816f7-ded7-47ef-be7f-f471a696cde4","Type":"ContainerStarted","Data":"63224e430271b319874dcc2c67bbb89dada81e109fe23cda6c832e19e6e3704f"} Jan 22 15:49:22 crc kubenswrapper[4825]: I0122 15:49:22.141373 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a2eadc4-a314-4c54-bdce-455b3697e4ad","Type":"ContainerStarted","Data":"3f90e9fde0e4d6f9cc64baf88837c3a5bc4c7eaa567e0dd3d2057129f97a50c9"} Jan 22 15:49:22 crc kubenswrapper[4825]: I0122 15:49:22.141623 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a2eadc4-a314-4c54-bdce-455b3697e4ad","Type":"ContainerStarted","Data":"5561b2587e67126ea2fe6f573a1a7ebbc1cfdcaa70bb2872692f4a7a3e821f75"} Jan 22 15:49:22 crc kubenswrapper[4825]: I0122 15:49:22.157169 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-bsgbr" podStartSLOduration=4.157130041 podStartE2EDuration="4.157130041s" podCreationTimestamp="2026-01-22 15:49:18 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:49:22.154316402 +0000 UTC m=+1508.915843322" watchObservedRunningTime="2026-01-22 15:49:22.157130041 +0000 UTC m=+1508.918656951" Jan 22 15:49:23 crc kubenswrapper[4825]: I0122 15:49:23.153582 4825 generic.go:334] "Generic (PLEG): container finished" podID="d75816f7-ded7-47ef-be7f-f471a696cde4" containerID="63224e430271b319874dcc2c67bbb89dada81e109fe23cda6c832e19e6e3704f" exitCode=0 Jan 22 15:49:23 crc kubenswrapper[4825]: I0122 15:49:23.153915 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-bsgbr" event={"ID":"d75816f7-ded7-47ef-be7f-f471a696cde4","Type":"ContainerDied","Data":"63224e430271b319874dcc2c67bbb89dada81e109fe23cda6c832e19e6e3704f"} Jan 22 15:49:24 crc kubenswrapper[4825]: I0122 15:49:24.707655 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-bsgbr" Jan 22 15:49:24 crc kubenswrapper[4825]: I0122 15:49:24.829271 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dkhn\" (UniqueName: \"kubernetes.io/projected/d75816f7-ded7-47ef-be7f-f471a696cde4-kube-api-access-9dkhn\") pod \"d75816f7-ded7-47ef-be7f-f471a696cde4\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " Jan 22 15:49:24 crc kubenswrapper[4825]: I0122 15:49:24.829425 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/d75816f7-ded7-47ef-be7f-f471a696cde4-certs\") pod \"d75816f7-ded7-47ef-be7f-f471a696cde4\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " Jan 22 15:49:24 crc kubenswrapper[4825]: I0122 15:49:24.829622 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d75816f7-ded7-47ef-be7f-f471a696cde4-scripts\") pod 
\"d75816f7-ded7-47ef-be7f-f471a696cde4\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " Jan 22 15:49:24 crc kubenswrapper[4825]: I0122 15:49:24.829768 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d75816f7-ded7-47ef-be7f-f471a696cde4-config-data\") pod \"d75816f7-ded7-47ef-be7f-f471a696cde4\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " Jan 22 15:49:24 crc kubenswrapper[4825]: I0122 15:49:24.830142 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d75816f7-ded7-47ef-be7f-f471a696cde4-combined-ca-bundle\") pod \"d75816f7-ded7-47ef-be7f-f471a696cde4\" (UID: \"d75816f7-ded7-47ef-be7f-f471a696cde4\") " Jan 22 15:49:24 crc kubenswrapper[4825]: I0122 15:49:24.835637 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d75816f7-ded7-47ef-be7f-f471a696cde4-kube-api-access-9dkhn" (OuterVolumeSpecName: "kube-api-access-9dkhn") pod "d75816f7-ded7-47ef-be7f-f471a696cde4" (UID: "d75816f7-ded7-47ef-be7f-f471a696cde4"). InnerVolumeSpecName "kube-api-access-9dkhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:49:24 crc kubenswrapper[4825]: I0122 15:49:24.836172 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d75816f7-ded7-47ef-be7f-f471a696cde4-scripts" (OuterVolumeSpecName: "scripts") pod "d75816f7-ded7-47ef-be7f-f471a696cde4" (UID: "d75816f7-ded7-47ef-be7f-f471a696cde4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:49:24 crc kubenswrapper[4825]: I0122 15:49:24.837029 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d75816f7-ded7-47ef-be7f-f471a696cde4-certs" (OuterVolumeSpecName: "certs") pod "d75816f7-ded7-47ef-be7f-f471a696cde4" (UID: "d75816f7-ded7-47ef-be7f-f471a696cde4"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:49:24 crc kubenswrapper[4825]: I0122 15:49:24.861881 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d75816f7-ded7-47ef-be7f-f471a696cde4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d75816f7-ded7-47ef-be7f-f471a696cde4" (UID: "d75816f7-ded7-47ef-be7f-f471a696cde4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:49:24 crc kubenswrapper[4825]: I0122 15:49:24.873145 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d75816f7-ded7-47ef-be7f-f471a696cde4-config-data" (OuterVolumeSpecName: "config-data") pod "d75816f7-ded7-47ef-be7f-f471a696cde4" (UID: "d75816f7-ded7-47ef-be7f-f471a696cde4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:49:24 crc kubenswrapper[4825]: I0122 15:49:24.933155 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d75816f7-ded7-47ef-be7f-f471a696cde4-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:24 crc kubenswrapper[4825]: I0122 15:49:24.933185 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d75816f7-ded7-47ef-be7f-f471a696cde4-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:24 crc kubenswrapper[4825]: I0122 15:49:24.933194 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d75816f7-ded7-47ef-be7f-f471a696cde4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:24 crc kubenswrapper[4825]: I0122 15:49:24.933205 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dkhn\" (UniqueName: \"kubernetes.io/projected/d75816f7-ded7-47ef-be7f-f471a696cde4-kube-api-access-9dkhn\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:24 crc kubenswrapper[4825]: I0122 15:49:24.933213 4825 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/d75816f7-ded7-47ef-be7f-f471a696cde4-certs\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:25 crc kubenswrapper[4825]: I0122 15:49:25.181499 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-bsgbr" event={"ID":"d75816f7-ded7-47ef-be7f-f471a696cde4","Type":"ContainerDied","Data":"17e12d03bffe08c6631e688b98c23ba3c7bc8c3c775cd13fa83b7914b6ac76eb"} Jan 22 15:49:25 crc kubenswrapper[4825]: I0122 15:49:25.181718 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17e12d03bffe08c6631e688b98c23ba3c7bc8c3c775cd13fa83b7914b6ac76eb" Jan 22 15:49:25 crc kubenswrapper[4825]: I0122 15:49:25.181580 4825 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-bsgbr" Jan 22 15:49:25 crc kubenswrapper[4825]: I0122 15:49:25.300509 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 22 15:49:25 crc kubenswrapper[4825]: I0122 15:49:25.300702 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="af574794-cc05-40fd-8dce-0497c37a9888" containerName="cloudkitty-proc" containerID="cri-o://03f6480418f1b85fd326581f6478d8ae49d2bbe95c8fdab8b1888e404fc399a6" gracePeriod=30 Jan 22 15:49:25 crc kubenswrapper[4825]: I0122 15:49:25.309752 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 22 15:49:25 crc kubenswrapper[4825]: I0122 15:49:25.311327 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" containerName="cloudkitty-api-log" containerID="cri-o://0e7537ea16924c190c2747c11144a108589391b9e8c36671010dae31af419e92" gracePeriod=30 Jan 22 15:49:25 crc kubenswrapper[4825]: I0122 15:49:25.311374 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" containerName="cloudkitty-api" containerID="cri-o://15e44a45d041b0b1e8eaa16c3a8109a4f9761c79112275e5232fd874d578073a" gracePeriod=30 Jan 22 15:49:25 crc kubenswrapper[4825]: I0122 15:49:25.337965 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-api-0" podUID="9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.202:8889/healthcheck\": EOF" Jan 22 15:49:25 crc kubenswrapper[4825]: I0122 15:49:25.338635 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cloudkitty-api-0" podUID="9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" containerName="cloudkitty-api" 
probeResult="failure" output="Get \"https://10.217.0.202:8889/healthcheck\": EOF" Jan 22 15:49:26 crc kubenswrapper[4825]: E0122 15:49:26.799166 4825 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45e6f05d_8a80_49ca_add6_e8c41572b664.slice/crio-conmon-020f01fa01c7531efac312a1a4ee10db30b605df6436c83ee37b61635da0a2e3.scope\": RecentStats: unable to find data in memory cache]" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.219873 4825 generic.go:334] "Generic (PLEG): container finished" podID="af574794-cc05-40fd-8dce-0497c37a9888" containerID="03f6480418f1b85fd326581f6478d8ae49d2bbe95c8fdab8b1888e404fc399a6" exitCode=0 Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.220178 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"af574794-cc05-40fd-8dce-0497c37a9888","Type":"ContainerDied","Data":"03f6480418f1b85fd326581f6478d8ae49d2bbe95c8fdab8b1888e404fc399a6"} Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.239946 4825 generic.go:334] "Generic (PLEG): container finished" podID="45e6f05d-8a80-49ca-add6-e8c41572b664" containerID="020f01fa01c7531efac312a1a4ee10db30b605df6436c83ee37b61635da0a2e3" exitCode=0 Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.240050 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"45e6f05d-8a80-49ca-add6-e8c41572b664","Type":"ContainerDied","Data":"020f01fa01c7531efac312a1a4ee10db30b605df6436c83ee37b61635da0a2e3"} Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.252890 4825 generic.go:334] "Generic (PLEG): container finished" podID="9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" containerID="0e7537ea16924c190c2747c11144a108589391b9e8c36671010dae31af419e92" exitCode=143 Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.252939 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-api-0" event={"ID":"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa","Type":"ContainerDied","Data":"0e7537ea16924c190c2747c11144a108589391b9e8c36671010dae31af419e92"} Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.467533 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.758319 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45e6f05d-8a80-49ca-add6-e8c41572b664-plugins-conf\") pod \"45e6f05d-8a80-49ca-add6-e8c41572b664\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.758400 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-confd\") pod \"45e6f05d-8a80-49ca-add6-e8c41572b664\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.758467 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45e6f05d-8a80-49ca-add6-e8c41572b664-pod-info\") pod \"45e6f05d-8a80-49ca-add6-e8c41572b664\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.758498 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bps94\" (UniqueName: \"kubernetes.io/projected/45e6f05d-8a80-49ca-add6-e8c41572b664-kube-api-access-bps94\") pod \"45e6f05d-8a80-49ca-add6-e8c41572b664\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.758518 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/45e6f05d-8a80-49ca-add6-e8c41572b664-config-data\") pod \"45e6f05d-8a80-49ca-add6-e8c41572b664\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.759251 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\") pod \"45e6f05d-8a80-49ca-add6-e8c41572b664\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.759253 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45e6f05d-8a80-49ca-add6-e8c41572b664-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "45e6f05d-8a80-49ca-add6-e8c41572b664" (UID: "45e6f05d-8a80-49ca-add6-e8c41572b664"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.759295 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-plugins\") pod \"45e6f05d-8a80-49ca-add6-e8c41572b664\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.759332 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45e6f05d-8a80-49ca-add6-e8c41572b664-server-conf\") pod \"45e6f05d-8a80-49ca-add6-e8c41572b664\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.759380 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45e6f05d-8a80-49ca-add6-e8c41572b664-erlang-cookie-secret\") pod \"45e6f05d-8a80-49ca-add6-e8c41572b664\" 
(UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.759408 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-tls\") pod \"45e6f05d-8a80-49ca-add6-e8c41572b664\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.759431 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-erlang-cookie\") pod \"45e6f05d-8a80-49ca-add6-e8c41572b664\" (UID: \"45e6f05d-8a80-49ca-add6-e8c41572b664\") " Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.760071 4825 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45e6f05d-8a80-49ca-add6-e8c41572b664-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.760888 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "45e6f05d-8a80-49ca-add6-e8c41572b664" (UID: "45e6f05d-8a80-49ca-add6-e8c41572b664"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.760939 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "45e6f05d-8a80-49ca-add6-e8c41572b664" (UID: "45e6f05d-8a80-49ca-add6-e8c41572b664"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.767621 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e6f05d-8a80-49ca-add6-e8c41572b664-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "45e6f05d-8a80-49ca-add6-e8c41572b664" (UID: "45e6f05d-8a80-49ca-add6-e8c41572b664"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.769002 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e6f05d-8a80-49ca-add6-e8c41572b664-kube-api-access-bps94" (OuterVolumeSpecName: "kube-api-access-bps94") pod "45e6f05d-8a80-49ca-add6-e8c41572b664" (UID: "45e6f05d-8a80-49ca-add6-e8c41572b664"). InnerVolumeSpecName "kube-api-access-bps94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.774851 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/45e6f05d-8a80-49ca-add6-e8c41572b664-pod-info" (OuterVolumeSpecName: "pod-info") pod "45e6f05d-8a80-49ca-add6-e8c41572b664" (UID: "45e6f05d-8a80-49ca-add6-e8c41572b664"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.787569 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "45e6f05d-8a80-49ca-add6-e8c41572b664" (UID: "45e6f05d-8a80-49ca-add6-e8c41572b664"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.826185 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45e6f05d-8a80-49ca-add6-e8c41572b664-config-data" (OuterVolumeSpecName: "config-data") pod "45e6f05d-8a80-49ca-add6-e8c41572b664" (UID: "45e6f05d-8a80-49ca-add6-e8c41572b664"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.854013 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95940e65-101e-4442-bd35-6faf7fdb6d15" (OuterVolumeSpecName: "persistence") pod "45e6f05d-8a80-49ca-add6-e8c41572b664" (UID: "45e6f05d-8a80-49ca-add6-e8c41572b664"). InnerVolumeSpecName "pvc-95940e65-101e-4442-bd35-6faf7fdb6d15". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.870552 4825 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45e6f05d-8a80-49ca-add6-e8c41572b664-pod-info\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.870591 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bps94\" (UniqueName: \"kubernetes.io/projected/45e6f05d-8a80-49ca-add6-e8c41572b664-kube-api-access-bps94\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.870604 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45e6f05d-8a80-49ca-add6-e8c41572b664-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.870647 4825 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\") on node \"crc\" " Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.870659 4825 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.870670 4825 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45e6f05d-8a80-49ca-add6-e8c41572b664-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.870683 4825 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.870693 4825 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.878404 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45e6f05d-8a80-49ca-add6-e8c41572b664-server-conf" (OuterVolumeSpecName: "server-conf") pod "45e6f05d-8a80-49ca-add6-e8c41572b664" (UID: "45e6f05d-8a80-49ca-add6-e8c41572b664"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.915144 4825 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.915605 4825 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-95940e65-101e-4442-bd35-6faf7fdb6d15" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95940e65-101e-4442-bd35-6faf7fdb6d15") on node "crc"
Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.974549 4825 reconciler_common.go:293] "Volume detached for volume \"pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\") on node \"crc\" DevicePath \"\""
Jan 22 15:49:27 crc kubenswrapper[4825]: I0122 15:49:27.974800 4825 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45e6f05d-8a80-49ca-add6-e8c41572b664-server-conf\") on node \"crc\" DevicePath \"\""
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.066515 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-kc9g7"]
Jan 22 15:49:28 crc kubenswrapper[4825]: E0122 15:49:28.067644 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e6f05d-8a80-49ca-add6-e8c41572b664" containerName="setup-container"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.067673 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e6f05d-8a80-49ca-add6-e8c41572b664" containerName="setup-container"
Jan 22 15:49:28 crc kubenswrapper[4825]: E0122 15:49:28.067739 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e6f05d-8a80-49ca-add6-e8c41572b664" containerName="rabbitmq"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.067748 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e6f05d-8a80-49ca-add6-e8c41572b664" containerName="rabbitmq"
Jan 22 15:49:28 crc kubenswrapper[4825]: E0122 15:49:28.067766 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d75816f7-ded7-47ef-be7f-f471a696cde4" containerName="cloudkitty-storageinit"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.067777 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d75816f7-ded7-47ef-be7f-f471a696cde4" containerName="cloudkitty-storageinit"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.068376 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e6f05d-8a80-49ca-add6-e8c41572b664" containerName="rabbitmq"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.068403 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="d75816f7-ded7-47ef-be7f-f471a696cde4" containerName="cloudkitty-storageinit"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.143643 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.238752 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.309225 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.342483 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-kc9g7"]
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.372757 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"af574794-cc05-40fd-8dce-0497c37a9888","Type":"ContainerDied","Data":"15591dd85f7ac1cb34d4cc8eef6b481f8c2118e5a130b9a6f8fa76683eeaa7af"}
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.372823 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.372836 4825 scope.go:117] "RemoveContainer" containerID="03f6480418f1b85fd326581f6478d8ae49d2bbe95c8fdab8b1888e404fc399a6"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.376104 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "45e6f05d-8a80-49ca-add6-e8c41572b664" (UID: "45e6f05d-8a80-49ca-add6-e8c41572b664"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.384084 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-combined-ca-bundle\") pod \"af574794-cc05-40fd-8dce-0497c37a9888\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") "
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.384196 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/af574794-cc05-40fd-8dce-0497c37a9888-certs\") pod \"af574794-cc05-40fd-8dce-0497c37a9888\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") "
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.384323 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-config-data-custom\") pod \"af574794-cc05-40fd-8dce-0497c37a9888\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") "
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.384402 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-scripts\") pod \"af574794-cc05-40fd-8dce-0497c37a9888\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") "
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.384432 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrvcv\" (UniqueName: \"kubernetes.io/projected/af574794-cc05-40fd-8dce-0497c37a9888-kube-api-access-hrvcv\") pod \"af574794-cc05-40fd-8dce-0497c37a9888\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") "
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.384551 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-config-data\") pod \"af574794-cc05-40fd-8dce-0497c37a9888\" (UID: \"af574794-cc05-40fd-8dce-0497c37a9888\") "
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.385215 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.385265 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.385320 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-config\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.385427 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.385455 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkpzn\" (UniqueName: \"kubernetes.io/projected/46958c79-89ff-48e9-bb5f-f4ab34575bea-kube-api-access-tkpzn\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.385483 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.385532 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.391769 4825 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45e6f05d-8a80-49ca-add6-e8c41572b664-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.398695 4825 generic.go:334] "Generic (PLEG): container finished" podID="215992ea-1abc-44d0-925b-799eb87bcc09" containerID="fe630163da9699c6ae7767c15986a0a522ba8248bdbb3d16653256e23ac471e7" exitCode=0
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.398788 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"215992ea-1abc-44d0-925b-799eb87bcc09","Type":"ContainerDied","Data":"fe630163da9699c6ae7767c15986a0a522ba8248bdbb3d16653256e23ac471e7"}
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.399032 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af574794-cc05-40fd-8dce-0497c37a9888-kube-api-access-hrvcv" (OuterVolumeSpecName: "kube-api-access-hrvcv") pod "af574794-cc05-40fd-8dce-0497c37a9888" (UID: "af574794-cc05-40fd-8dce-0497c37a9888"). InnerVolumeSpecName "kube-api-access-hrvcv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.399277 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af574794-cc05-40fd-8dce-0497c37a9888-certs" (OuterVolumeSpecName: "certs") pod "af574794-cc05-40fd-8dce-0497c37a9888" (UID: "af574794-cc05-40fd-8dce-0497c37a9888"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.400535 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-scripts" (OuterVolumeSpecName: "scripts") pod "af574794-cc05-40fd-8dce-0497c37a9888" (UID: "af574794-cc05-40fd-8dce-0497c37a9888"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.410650 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a2eadc4-a314-4c54-bdce-455b3697e4ad","Type":"ContainerStarted","Data":"0f4640b29dbde81d26bee8b5ca792487d7b77f524d2770553eba558f45ce0064"}
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.411236 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "af574794-cc05-40fd-8dce-0497c37a9888" (UID: "af574794-cc05-40fd-8dce-0497c37a9888"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.411347 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.417749 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"45e6f05d-8a80-49ca-add6-e8c41572b664","Type":"ContainerDied","Data":"e3ceee48e9c2e3884952150581d52336200386533246c6730f047e4b1fbdd2dc"}
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.418031 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.444966 4825 scope.go:117] "RemoveContainer" containerID="020f01fa01c7531efac312a1a4ee10db30b605df6436c83ee37b61635da0a2e3"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.449845 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.569599333 podStartE2EDuration="14.449823048s" podCreationTimestamp="2026-01-22 15:49:14 +0000 UTC" firstStartedPulling="2026-01-22 15:49:15.752843454 +0000 UTC m=+1502.514370364" lastFinishedPulling="2026-01-22 15:49:26.633067169 +0000 UTC m=+1513.394594079" observedRunningTime="2026-01-22 15:49:28.435612467 +0000 UTC m=+1515.197139367" watchObservedRunningTime="2026-01-22 15:49:28.449823048 +0000 UTC m=+1515.211349958"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.457806 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af574794-cc05-40fd-8dce-0497c37a9888" (UID: "af574794-cc05-40fd-8dce-0497c37a9888"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.462406 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-config-data" (OuterVolumeSpecName: "config-data") pod "af574794-cc05-40fd-8dce-0497c37a9888" (UID: "af574794-cc05-40fd-8dce-0497c37a9888"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.484237 4825 scope.go:117] "RemoveContainer" containerID="c47a51e689e8e6934dbe0f9c52428877a4be4d4087bcabd749f2d7315b443e0c"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.493736 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.493818 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-config\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.493966 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.494014 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkpzn\" (UniqueName: \"kubernetes.io/projected/46958c79-89ff-48e9-bb5f-f4ab34575bea-kube-api-access-tkpzn\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.494060 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.494125 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.494221 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.494591 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.495258 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.495311 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-config\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.495311 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.495778 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.496025 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.496089 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.497021 4825 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/af574794-cc05-40fd-8dce-0497c37a9888-certs\") on node \"crc\" DevicePath \"\""
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.497047 4825 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.497061 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.497071 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrvcv\" (UniqueName: \"kubernetes.io/projected/af574794-cc05-40fd-8dce-0497c37a9888-kube-api-access-hrvcv\") on node \"crc\" DevicePath \"\""
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.497083 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af574794-cc05-40fd-8dce-0497c37a9888-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.497108 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.515082 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.522558 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkpzn\" (UniqueName: \"kubernetes.io/projected/46958c79-89ff-48e9-bb5f-f4ab34575bea-kube-api-access-tkpzn\") pod \"dnsmasq-dns-dc7c944bf-kc9g7\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.535093 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 22 15:49:28 crc kubenswrapper[4825]: E0122 15:49:28.535793 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af574794-cc05-40fd-8dce-0497c37a9888" containerName="cloudkitty-proc"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.535810 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="af574794-cc05-40fd-8dce-0497c37a9888" containerName="cloudkitty-proc"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.536158 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="af574794-cc05-40fd-8dce-0497c37a9888" containerName="cloudkitty-proc"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.537782 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.541288 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.541571 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.541729 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.542233 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.543016 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.543132 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qcdwt"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.543248 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.545173 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.734358 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.799653 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"]
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.809113 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"]
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.834995 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"]
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.836850 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.839312 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80f35c3b-7247-4a39-8562-d68602381fa1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.839417 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80f35c3b-7247-4a39-8562-d68602381fa1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.839469 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgs6b\" (UniqueName: \"kubernetes.io/projected/80f35c3b-7247-4a39-8562-d68602381fa1-kube-api-access-qgs6b\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.839532 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80f35c3b-7247-4a39-8562-d68602381fa1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.839561 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80f35c3b-7247-4a39-8562-d68602381fa1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.839587 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80f35c3b-7247-4a39-8562-d68602381fa1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.839640 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80f35c3b-7247-4a39-8562-d68602381fa1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.839762 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80f35c3b-7247-4a39-8562-d68602381fa1-config-data\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.839829 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.839862 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80f35c3b-7247-4a39-8562-d68602381fa1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.840040 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80f35c3b-7247-4a39-8562-d68602381fa1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.845207 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.854791 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"]
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.942854 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80f35c3b-7247-4a39-8562-d68602381fa1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.943282 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80f35c3b-7247-4a39-8562-d68602381fa1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.943552 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80f35c3b-7247-4a39-8562-d68602381fa1-config-data\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.943653 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnflz\" (UniqueName: \"kubernetes.io/projected/d1a2b167-da42-48fa-9e6b-0038aa5a36ce-kube-api-access-cnflz\") pod \"cloudkitty-proc-0\" (UID: \"d1a2b167-da42-48fa-9e6b-0038aa5a36ce\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.943735 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.943816 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80f35c3b-7247-4a39-8562-d68602381fa1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.944028 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80f35c3b-7247-4a39-8562-d68602381fa1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.944384 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1a2b167-da42-48fa-9e6b-0038aa5a36ce-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"d1a2b167-da42-48fa-9e6b-0038aa5a36ce\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.944444 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/d1a2b167-da42-48fa-9e6b-0038aa5a36ce-certs\") pod \"cloudkitty-proc-0\" (UID: \"d1a2b167-da42-48fa-9e6b-0038aa5a36ce\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.944553 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a2b167-da42-48fa-9e6b-0038aa5a36ce-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"d1a2b167-da42-48fa-9e6b-0038aa5a36ce\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.944635 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80f35c3b-7247-4a39-8562-d68602381fa1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.944970 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1a2b167-da42-48fa-9e6b-0038aa5a36ce-scripts\") pod \"cloudkitty-proc-0\" (UID: \"d1a2b167-da42-48fa-9e6b-0038aa5a36ce\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.945057 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80f35c3b-7247-4a39-8562-d68602381fa1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.945126 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a2b167-da42-48fa-9e6b-0038aa5a36ce-config-data\") pod \"cloudkitty-proc-0\" (UID: \"d1a2b167-da42-48fa-9e6b-0038aa5a36ce\") " pod="openstack/cloudkitty-proc-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.945214 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgs6b\" (UniqueName: \"kubernetes.io/projected/80f35c3b-7247-4a39-8562-d68602381fa1-kube-api-access-qgs6b\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.945759 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80f35c3b-7247-4a39-8562-d68602381fa1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.945796 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80f35c3b-7247-4a39-8562-d68602381fa1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.950487 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80f35c3b-7247-4a39-8562-d68602381fa1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.952277 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80f35c3b-7247-4a39-8562-d68602381fa1-config-data\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.952659 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80f35c3b-7247-4a39-8562-d68602381fa1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.952672 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80f35c3b-7247-4a39-8562-d68602381fa1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.952931 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80f35c3b-7247-4a39-8562-d68602381fa1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.955612 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80f35c3b-7247-4a39-8562-d68602381fa1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0"
Jan 22 15:49:28
crc kubenswrapper[4825]: I0122 15:49:28.957395 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.957515 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9e00e5ecb6ed7059a52e386faabffab5919b9b0484b9559a256c53a584854955/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.960306 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80f35c3b-7247-4a39-8562-d68602381fa1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0" Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.962179 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80f35c3b-7247-4a39-8562-d68602381fa1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0" Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.964503 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80f35c3b-7247-4a39-8562-d68602381fa1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0" Jan 22 15:49:28 crc kubenswrapper[4825]: I0122 15:49:28.966773 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qgs6b\" (UniqueName: \"kubernetes.io/projected/80f35c3b-7247-4a39-8562-d68602381fa1-kube-api-access-qgs6b\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.048294 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/d1a2b167-da42-48fa-9e6b-0038aa5a36ce-certs\") pod \"cloudkitty-proc-0\" (UID: \"d1a2b167-da42-48fa-9e6b-0038aa5a36ce\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.048342 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1a2b167-da42-48fa-9e6b-0038aa5a36ce-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"d1a2b167-da42-48fa-9e6b-0038aa5a36ce\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.048398 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a2b167-da42-48fa-9e6b-0038aa5a36ce-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"d1a2b167-da42-48fa-9e6b-0038aa5a36ce\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.048451 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1a2b167-da42-48fa-9e6b-0038aa5a36ce-scripts\") pod \"cloudkitty-proc-0\" (UID: \"d1a2b167-da42-48fa-9e6b-0038aa5a36ce\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.048498 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a2b167-da42-48fa-9e6b-0038aa5a36ce-config-data\") pod \"cloudkitty-proc-0\" (UID: 
\"d1a2b167-da42-48fa-9e6b-0038aa5a36ce\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.050823 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnflz\" (UniqueName: \"kubernetes.io/projected/d1a2b167-da42-48fa-9e6b-0038aa5a36ce-kube-api-access-cnflz\") pod \"cloudkitty-proc-0\" (UID: \"d1a2b167-da42-48fa-9e6b-0038aa5a36ce\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.053468 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1a2b167-da42-48fa-9e6b-0038aa5a36ce-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"d1a2b167-da42-48fa-9e6b-0038aa5a36ce\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.053668 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/d1a2b167-da42-48fa-9e6b-0038aa5a36ce-certs\") pod \"cloudkitty-proc-0\" (UID: \"d1a2b167-da42-48fa-9e6b-0038aa5a36ce\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.054543 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1a2b167-da42-48fa-9e6b-0038aa5a36ce-scripts\") pod \"cloudkitty-proc-0\" (UID: \"d1a2b167-da42-48fa-9e6b-0038aa5a36ce\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.061430 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a2b167-da42-48fa-9e6b-0038aa5a36ce-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"d1a2b167-da42-48fa-9e6b-0038aa5a36ce\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.070122 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95940e65-101e-4442-bd35-6faf7fdb6d15\") pod \"rabbitmq-server-0\" (UID: \"80f35c3b-7247-4a39-8562-d68602381fa1\") " pod="openstack/rabbitmq-server-0" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.073139 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a2b167-da42-48fa-9e6b-0038aa5a36ce-config-data\") pod \"cloudkitty-proc-0\" (UID: \"d1a2b167-da42-48fa-9e6b-0038aa5a36ce\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.096717 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnflz\" (UniqueName: \"kubernetes.io/projected/d1a2b167-da42-48fa-9e6b-0038aa5a36ce-kube-api-access-cnflz\") pod \"cloudkitty-proc-0\" (UID: \"d1a2b167-da42-48fa-9e6b-0038aa5a36ce\") " pod="openstack/cloudkitty-proc-0" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.122517 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.131589 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.272357 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.273043 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-plugins\") pod \"215992ea-1abc-44d0-925b-799eb87bcc09\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.273361 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmkrj\" (UniqueName: \"kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-kube-api-access-tmkrj\") pod \"215992ea-1abc-44d0-925b-799eb87bcc09\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.273395 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-erlang-cookie\") pod \"215992ea-1abc-44d0-925b-799eb87bcc09\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.273437 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/215992ea-1abc-44d0-925b-799eb87bcc09-pod-info\") pod \"215992ea-1abc-44d0-925b-799eb87bcc09\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.273454 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-tls\") pod \"215992ea-1abc-44d0-925b-799eb87bcc09\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.273519 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-confd\") pod \"215992ea-1abc-44d0-925b-799eb87bcc09\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.273542 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/215992ea-1abc-44d0-925b-799eb87bcc09-server-conf\") pod \"215992ea-1abc-44d0-925b-799eb87bcc09\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.273572 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/215992ea-1abc-44d0-925b-799eb87bcc09-config-data\") pod \"215992ea-1abc-44d0-925b-799eb87bcc09\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.273675 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/215992ea-1abc-44d0-925b-799eb87bcc09-plugins-conf\") pod \"215992ea-1abc-44d0-925b-799eb87bcc09\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.276157 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "215992ea-1abc-44d0-925b-799eb87bcc09" (UID: "215992ea-1abc-44d0-925b-799eb87bcc09"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.279904 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\") pod \"215992ea-1abc-44d0-925b-799eb87bcc09\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.280093 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/215992ea-1abc-44d0-925b-799eb87bcc09-erlang-cookie-secret\") pod \"215992ea-1abc-44d0-925b-799eb87bcc09\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.281157 4825 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.281841 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "215992ea-1abc-44d0-925b-799eb87bcc09" (UID: "215992ea-1abc-44d0-925b-799eb87bcc09"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.286722 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215992ea-1abc-44d0-925b-799eb87bcc09-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "215992ea-1abc-44d0-925b-799eb87bcc09" (UID: "215992ea-1abc-44d0-925b-799eb87bcc09"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.299098 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-kube-api-access-tmkrj" (OuterVolumeSpecName: "kube-api-access-tmkrj") pod "215992ea-1abc-44d0-925b-799eb87bcc09" (UID: "215992ea-1abc-44d0-925b-799eb87bcc09"). InnerVolumeSpecName "kube-api-access-tmkrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.300332 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215992ea-1abc-44d0-925b-799eb87bcc09-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "215992ea-1abc-44d0-925b-799eb87bcc09" (UID: "215992ea-1abc-44d0-925b-799eb87bcc09"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.301014 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/215992ea-1abc-44d0-925b-799eb87bcc09-pod-info" (OuterVolumeSpecName: "pod-info") pod "215992ea-1abc-44d0-925b-799eb87bcc09" (UID: "215992ea-1abc-44d0-925b-799eb87bcc09"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.325876 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "215992ea-1abc-44d0-925b-799eb87bcc09" (UID: "215992ea-1abc-44d0-925b-799eb87bcc09"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.336309 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91" (OuterVolumeSpecName: "persistence") pod "215992ea-1abc-44d0-925b-799eb87bcc09" (UID: "215992ea-1abc-44d0-925b-799eb87bcc09"). InnerVolumeSpecName "pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.389285 4825 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/215992ea-1abc-44d0-925b-799eb87bcc09-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.389333 4825 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\") on node \"crc\" " Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.389347 4825 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/215992ea-1abc-44d0-925b-799eb87bcc09-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.389358 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmkrj\" (UniqueName: \"kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-kube-api-access-tmkrj\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.389366 4825 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:29 crc kubenswrapper[4825]: 
I0122 15:49:29.389374 4825 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/215992ea-1abc-44d0-925b-799eb87bcc09-pod-info\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.389381 4825 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.390640 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215992ea-1abc-44d0-925b-799eb87bcc09-server-conf" (OuterVolumeSpecName: "server-conf") pod "215992ea-1abc-44d0-925b-799eb87bcc09" (UID: "215992ea-1abc-44d0-925b-799eb87bcc09"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.404684 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215992ea-1abc-44d0-925b-799eb87bcc09-config-data" (OuterVolumeSpecName: "config-data") pod "215992ea-1abc-44d0-925b-799eb87bcc09" (UID: "215992ea-1abc-44d0-925b-799eb87bcc09"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.434878 4825 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.435299 4825 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91") on node "crc" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.445777 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"215992ea-1abc-44d0-925b-799eb87bcc09","Type":"ContainerDied","Data":"84fcdb122a66fc7642a4594c94dba1cabd8125136cd4440e782e4b0be2113eec"} Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.445859 4825 scope.go:117] "RemoveContainer" containerID="fe630163da9699c6ae7767c15986a0a522ba8248bdbb3d16653256e23ac471e7" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.446103 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.492361 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/215992ea-1abc-44d0-925b-799eb87bcc09-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.492387 4825 reconciler_common.go:293] "Volume detached for volume \"pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.492398 4825 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/215992ea-1abc-44d0-925b-799eb87bcc09-server-conf\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.533492 4825 scope.go:117] "RemoveContainer" containerID="184e43136592bf3469b06dc128b988a48972055ca89cc79136bb1b491d6c7e34" Jan 22 15:49:29 crc 
kubenswrapper[4825]: I0122 15:49:29.539732 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45e6f05d-8a80-49ca-add6-e8c41572b664" path="/var/lib/kubelet/pods/45e6f05d-8a80-49ca-add6-e8c41572b664/volumes" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.540936 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af574794-cc05-40fd-8dce-0497c37a9888" path="/var/lib/kubelet/pods/af574794-cc05-40fd-8dce-0497c37a9888/volumes" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.593956 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "215992ea-1abc-44d0-925b-799eb87bcc09" (UID: "215992ea-1abc-44d0-925b-799eb87bcc09"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.594454 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-confd\") pod \"215992ea-1abc-44d0-925b-799eb87bcc09\" (UID: \"215992ea-1abc-44d0-925b-799eb87bcc09\") " Jan 22 15:49:29 crc kubenswrapper[4825]: W0122 15:49:29.594670 4825 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/215992ea-1abc-44d0-925b-799eb87bcc09/volumes/kubernetes.io~projected/rabbitmq-confd Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.594694 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "215992ea-1abc-44d0-925b-799eb87bcc09" (UID: "215992ea-1abc-44d0-925b-799eb87bcc09"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.595193 4825 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/215992ea-1abc-44d0-925b-799eb87bcc09-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:29 crc kubenswrapper[4825]: I0122 15:49:29.898468 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-kc9g7"] Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.207289 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 15:49:30 crc kubenswrapper[4825]: W0122 15:49:30.210012 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1a2b167_da42_48fa_9e6b_0038aa5a36ce.slice/crio-3fe8a45f2477f5eaa176af5a1c4eb1f1f14129ecfcd25cf694be9bd3edfe931a WatchSource:0}: Error finding container 3fe8a45f2477f5eaa176af5a1c4eb1f1f14129ecfcd25cf694be9bd3edfe931a: Status 404 returned error can't find the container with id 3fe8a45f2477f5eaa176af5a1c4eb1f1f14129ecfcd25cf694be9bd3edfe931a Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.213476 4825 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.241352 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.266683 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.390522 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.414470 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 15:49:30 crc 
kubenswrapper[4825]: E0122 15:49:30.415088 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215992ea-1abc-44d0-925b-799eb87bcc09" containerName="rabbitmq" Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.415112 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="215992ea-1abc-44d0-925b-799eb87bcc09" containerName="rabbitmq" Jan 22 15:49:30 crc kubenswrapper[4825]: E0122 15:49:30.415139 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215992ea-1abc-44d0-925b-799eb87bcc09" containerName="setup-container" Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.415146 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="215992ea-1abc-44d0-925b-799eb87bcc09" containerName="setup-container" Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.415370 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="215992ea-1abc-44d0-925b-799eb87bcc09" containerName="rabbitmq" Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.416806 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.431329 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.431800 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.431846 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.432092 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.432828 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.433595 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-njvz2" Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.433767 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.444052 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.502724 4825 generic.go:334] "Generic (PLEG): container finished" podID="9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" containerID="15e44a45d041b0b1e8eaa16c3a8109a4f9761c79112275e5232fd874d578073a" exitCode=0 Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.502765 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa","Type":"ContainerDied","Data":"15e44a45d041b0b1e8eaa16c3a8109a4f9761c79112275e5232fd874d578073a"} 
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.504571 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7" event={"ID":"46958c79-89ff-48e9-bb5f-f4ab34575bea","Type":"ContainerStarted","Data":"508c530679702a78ef35adb2f137c5721fb92a653eaa6d3937d4980442dc6394"}
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.510490 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"d1a2b167-da42-48fa-9e6b-0038aa5a36ce","Type":"ContainerStarted","Data":"3fe8a45f2477f5eaa176af5a1c4eb1f1f14129ecfcd25cf694be9bd3edfe931a"}
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.522097 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"80f35c3b-7247-4a39-8562-d68602381fa1","Type":"ContainerStarted","Data":"5ee381b06bd8d35ab3176221c40688a840e70512fb63d2468eabffd9579fc030"}
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.589193 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/efaf42df-9ed2-41b2-b660-bacb51298b2c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.589234 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/efaf42df-9ed2-41b2-b660-bacb51298b2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.589272 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/efaf42df-9ed2-41b2-b660-bacb51298b2c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.589344 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.589364 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/efaf42df-9ed2-41b2-b660-bacb51298b2c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.589462 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/efaf42df-9ed2-41b2-b660-bacb51298b2c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.589506 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/efaf42df-9ed2-41b2-b660-bacb51298b2c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.589554 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/efaf42df-9ed2-41b2-b660-bacb51298b2c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.589583 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/efaf42df-9ed2-41b2-b660-bacb51298b2c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.589607 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxrc7\" (UniqueName: \"kubernetes.io/projected/efaf42df-9ed2-41b2-b660-bacb51298b2c-kube-api-access-nxrc7\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.589630 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/efaf42df-9ed2-41b2-b660-bacb51298b2c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.639936 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.693590 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/efaf42df-9ed2-41b2-b660-bacb51298b2c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.694182 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/efaf42df-9ed2-41b2-b660-bacb51298b2c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.694297 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxrc7\" (UniqueName: \"kubernetes.io/projected/efaf42df-9ed2-41b2-b660-bacb51298b2c-kube-api-access-nxrc7\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.694399 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/efaf42df-9ed2-41b2-b660-bacb51298b2c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.694524 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/efaf42df-9ed2-41b2-b660-bacb51298b2c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.694684 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/efaf42df-9ed2-41b2-b660-bacb51298b2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.694841 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/efaf42df-9ed2-41b2-b660-bacb51298b2c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.695040 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.695162 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/efaf42df-9ed2-41b2-b660-bacb51298b2c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.695471 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/efaf42df-9ed2-41b2-b660-bacb51298b2c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.695569 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/efaf42df-9ed2-41b2-b660-bacb51298b2c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.697411 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/efaf42df-9ed2-41b2-b660-bacb51298b2c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.698148 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/efaf42df-9ed2-41b2-b660-bacb51298b2c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.703943 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/efaf42df-9ed2-41b2-b660-bacb51298b2c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.709017 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/efaf42df-9ed2-41b2-b660-bacb51298b2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.710043 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/efaf42df-9ed2-41b2-b660-bacb51298b2c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.710461 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/efaf42df-9ed2-41b2-b660-bacb51298b2c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.710877 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/efaf42df-9ed2-41b2-b660-bacb51298b2c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.718700 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.718751 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e5cc72a44005dd2703cc9543bdd999652809794a76249037f23b6c30ba242223/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.720838 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxrc7\" (UniqueName: \"kubernetes.io/projected/efaf42df-9ed2-41b2-b660-bacb51298b2c-kube-api-access-nxrc7\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.721837 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/efaf42df-9ed2-41b2-b660-bacb51298b2c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.722394 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/efaf42df-9ed2-41b2-b660-bacb51298b2c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.776880 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47a1d02d-d1c3-4e22-85a7-0892a315dd91\") pod \"rabbitmq-cell1-server-0\" (UID: \"efaf42df-9ed2-41b2-b660-bacb51298b2c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.797065 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbv4q\" (UniqueName: \"kubernetes.io/projected/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-kube-api-access-jbv4q\") pod \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") "
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.797128 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-config-data-custom\") pod \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") "
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.797188 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-scripts\") pod \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") "
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.797236 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-config-data\") pod \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") "
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.797284 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-public-tls-certs\") pod \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") "
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.797380 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-logs\") pod \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") "
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.797463 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-combined-ca-bundle\") pod \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") "
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.797511 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-internal-tls-certs\") pod \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") "
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.797667 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-certs\") pod \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\" (UID: \"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa\") "
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.801768 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-logs" (OuterVolumeSpecName: "logs") pod "9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" (UID: "9008ffc7-d936-4eb7-a1c0-8d36f776d9aa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.803551 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-kube-api-access-jbv4q" (OuterVolumeSpecName: "kube-api-access-jbv4q") pod "9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" (UID: "9008ffc7-d936-4eb7-a1c0-8d36f776d9aa"). InnerVolumeSpecName "kube-api-access-jbv4q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.804313 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-certs" (OuterVolumeSpecName: "certs") pod "9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" (UID: "9008ffc7-d936-4eb7-a1c0-8d36f776d9aa"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.808245 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" (UID: "9008ffc7-d936-4eb7-a1c0-8d36f776d9aa"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.835289 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-scripts" (OuterVolumeSpecName: "scripts") pod "9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" (UID: "9008ffc7-d936-4eb7-a1c0-8d36f776d9aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:49:30 crc kubenswrapper[4825]: I0122 15:49:30.875376 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" (UID: "9008ffc7-d936-4eb7-a1c0-8d36f776d9aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.103680 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" (UID: "9008ffc7-d936-4eb7-a1c0-8d36f776d9aa"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.105522 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.107171 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbv4q\" (UniqueName: \"kubernetes.io/projected/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-kube-api-access-jbv4q\") on node \"crc\" DevicePath \"\""
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.107195 4825 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.107205 4825 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.107213 4825 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.107224 4825 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-logs\") on node \"crc\" DevicePath \"\""
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.107233 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.107271 4825 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-certs\") on node \"crc\" DevicePath \"\""
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.115333 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-config-data" (OuterVolumeSpecName: "config-data") pod "9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" (UID: "9008ffc7-d936-4eb7-a1c0-8d36f776d9aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.130937 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" (UID: "9008ffc7-d936-4eb7-a1c0-8d36f776d9aa"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.210237 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.210275 4825 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.536666 4825 generic.go:334] "Generic (PLEG): container finished" podID="46958c79-89ff-48e9-bb5f-f4ab34575bea" containerID="ee2541af36c810a355aaa1df0357b556a6b23dba2e53bd88ec8fd232eb6ec607" exitCode=0
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.542247 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="215992ea-1abc-44d0-925b-799eb87bcc09" path="/var/lib/kubelet/pods/215992ea-1abc-44d0-925b-799eb87bcc09/volumes"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.543800 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7" event={"ID":"46958c79-89ff-48e9-bb5f-f4ab34575bea","Type":"ContainerDied","Data":"ee2541af36c810a355aaa1df0357b556a6b23dba2e53bd88ec8fd232eb6ec607"}
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.543832 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"d1a2b167-da42-48fa-9e6b-0038aa5a36ce","Type":"ContainerStarted","Data":"2186b9d83f06890b56ed8eb1fcc1077bb5b36781f608cb8b402f018cce693d05"}
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.545167 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"9008ffc7-d936-4eb7-a1c0-8d36f776d9aa","Type":"ContainerDied","Data":"33612e08d8ba56ec8dc58ad30b63ea57b5a19b5e6eab6a26bbe4bdd459df5678"}
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.545221 4825 scope.go:117] "RemoveContainer" containerID="15e44a45d041b0b1e8eaa16c3a8109a4f9761c79112275e5232fd874d578073a"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.545228 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.591533 4825 scope.go:117] "RemoveContainer" containerID="0e7537ea16924c190c2747c11144a108589391b9e8c36671010dae31af419e92"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.605797 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=3.256269365 podStartE2EDuration="3.605775316s" podCreationTimestamp="2026-01-22 15:49:28 +0000 UTC" firstStartedPulling="2026-01-22 15:49:30.213149491 +0000 UTC m=+1516.974676401" lastFinishedPulling="2026-01-22 15:49:30.562655442 +0000 UTC m=+1517.324182352" observedRunningTime="2026-01-22 15:49:31.60344261 +0000 UTC m=+1518.364969520" watchObservedRunningTime="2026-01-22 15:49:31.605775316 +0000 UTC m=+1518.367302226"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.832762 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.854209 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"]
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.868136 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"]
Jan 22 15:49:31 crc kubenswrapper[4825]: W0122 15:49:31.881333 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefaf42df_9ed2_41b2_b660_bacb51298b2c.slice/crio-e6706df5cf51c5c870bf88c4a87f9d788e4dc5f3f5ef8146664903dd67bb51aa WatchSource:0}: Error finding container e6706df5cf51c5c870bf88c4a87f9d788e4dc5f3f5ef8146664903dd67bb51aa: Status 404 returned error can't find the container with id e6706df5cf51c5c870bf88c4a87f9d788e4dc5f3f5ef8146664903dd67bb51aa
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.890227 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"]
Jan 22 15:49:31 crc kubenswrapper[4825]: E0122 15:49:31.890760 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" containerName="cloudkitty-api"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.890779 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" containerName="cloudkitty-api"
Jan 22 15:49:31 crc kubenswrapper[4825]: E0122 15:49:31.890832 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" containerName="cloudkitty-api-log"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.890845 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" containerName="cloudkitty-api-log"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.891155 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" containerName="cloudkitty-api"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.891168 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" containerName="cloudkitty-api-log"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.892916 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.899504 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.899598 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.899872 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.910113 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"]
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.991657 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a099ee6b-e91c-4017-92ec-ad9289342d56-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.991741 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a099ee6b-e91c-4017-92ec-ad9289342d56-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.991911 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a099ee6b-e91c-4017-92ec-ad9289342d56-config-data\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.992139 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a099ee6b-e91c-4017-92ec-ad9289342d56-certs\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.992200 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a099ee6b-e91c-4017-92ec-ad9289342d56-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.992260 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a099ee6b-e91c-4017-92ec-ad9289342d56-scripts\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.992286 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a099ee6b-e91c-4017-92ec-ad9289342d56-logs\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.992308 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pv97\" (UniqueName: \"kubernetes.io/projected/a099ee6b-e91c-4017-92ec-ad9289342d56-kube-api-access-7pv97\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:31 crc kubenswrapper[4825]: I0122 15:49:31.992392 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a099ee6b-e91c-4017-92ec-ad9289342d56-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.093826 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a099ee6b-e91c-4017-92ec-ad9289342d56-certs\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.093887 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a099ee6b-e91c-4017-92ec-ad9289342d56-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.093938 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a099ee6b-e91c-4017-92ec-ad9289342d56-scripts\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.093960 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a099ee6b-e91c-4017-92ec-ad9289342d56-logs\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.093999 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pv97\" (UniqueName: \"kubernetes.io/projected/a099ee6b-e91c-4017-92ec-ad9289342d56-kube-api-access-7pv97\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.094056 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a099ee6b-e91c-4017-92ec-ad9289342d56-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.094089 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a099ee6b-e91c-4017-92ec-ad9289342d56-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.094124 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a099ee6b-e91c-4017-92ec-ad9289342d56-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.094203 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a099ee6b-e91c-4017-92ec-ad9289342d56-config-data\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.095028 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a099ee6b-e91c-4017-92ec-ad9289342d56-logs\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.097424 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a099ee6b-e91c-4017-92ec-ad9289342d56-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.098482 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a099ee6b-e91c-4017-92ec-ad9289342d56-config-data\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.098545 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a099ee6b-e91c-4017-92ec-ad9289342d56-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.099629 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a099ee6b-e91c-4017-92ec-ad9289342d56-scripts\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.100648 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a099ee6b-e91c-4017-92ec-ad9289342d56-certs\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.101210 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a099ee6b-e91c-4017-92ec-ad9289342d56-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.101708 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a099ee6b-e91c-4017-92ec-ad9289342d56-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.129101 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pv97\" (UniqueName: \"kubernetes.io/projected/a099ee6b-e91c-4017-92ec-ad9289342d56-kube-api-access-7pv97\") pod \"cloudkitty-api-0\" (UID: \"a099ee6b-e91c-4017-92ec-ad9289342d56\") " pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.415697 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.561138 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7" event={"ID":"46958c79-89ff-48e9-bb5f-f4ab34575bea","Type":"ContainerStarted","Data":"6e796a891254fd632b2264705b6587b4e0bc2d248b5a72d2ec807c8d34d00b15"}
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.561287 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7"
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.562960 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"efaf42df-9ed2-41b2-b660-bacb51298b2c","Type":"ContainerStarted","Data":"e6706df5cf51c5c870bf88c4a87f9d788e4dc5f3f5ef8146664903dd67bb51aa"}
Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.592830 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7" podStartSLOduration=5.592804569 podStartE2EDuration="5.592804569s" podCreationTimestamp="2026-01-22 15:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:49:32.581044647 +0000 UTC m=+1519.342571567" watchObservedRunningTime="2026-01-22 15:49:32.592804569 +0000 UTC m=+1519.354331489" Jan 22 15:49:32 crc kubenswrapper[4825]: I0122 15:49:32.974151 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 22 15:49:33 crc kubenswrapper[4825]: I0122 15:49:33.584852 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" path="/var/lib/kubelet/pods/9008ffc7-d936-4eb7-a1c0-8d36f776d9aa/volumes" Jan 22 15:49:33 crc kubenswrapper[4825]: I0122 15:49:33.633186 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"a099ee6b-e91c-4017-92ec-ad9289342d56","Type":"ContainerStarted","Data":"49373ccd713735a06766267eb38e70b855928a6babb235f2f8f0a8730541d32d"} Jan 22 15:49:34 crc kubenswrapper[4825]: I0122 15:49:34.750820 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"a099ee6b-e91c-4017-92ec-ad9289342d56","Type":"ContainerStarted","Data":"b009070cacc243a8e57e6febd4e73d69c86462da8061fc4c546dc7b01c46227e"} Jan 22 15:49:35 crc kubenswrapper[4825]: I0122 15:49:35.384303 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-api-0" podUID="9008ffc7-d936-4eb7-a1c0-8d36f776d9aa" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.202:8889/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 15:49:35 crc kubenswrapper[4825]: I0122 15:49:35.957363 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"a099ee6b-e91c-4017-92ec-ad9289342d56","Type":"ContainerStarted","Data":"acaeb75a7196ea78c33e5090830ade6b981ede691f94a62a441b4ea1ef487650"} Jan 22 15:49:35 crc kubenswrapper[4825]: I0122 
15:49:35.958043 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Jan 22 15:49:35 crc kubenswrapper[4825]: I0122 15:49:35.993364 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=4.993338604 podStartE2EDuration="4.993338604s" podCreationTimestamp="2026-01-22 15:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:49:35.98538618 +0000 UTC m=+1522.746913110" watchObservedRunningTime="2026-01-22 15:49:35.993338604 +0000 UTC m=+1522.754865514" Jan 22 15:49:38 crc kubenswrapper[4825]: I0122 15:49:38.736056 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7" Jan 22 15:49:38 crc kubenswrapper[4825]: I0122 15:49:38.807440 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-dllkj"] Jan 22 15:49:38 crc kubenswrapper[4825]: I0122 15:49:38.807791 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54dd998c-dllkj" podUID="60992c6a-3516-45a6-ab46-7705b343bf46" containerName="dnsmasq-dns" containerID="cri-o://ba809397fea502c09c2186b8a90756d6ed5509e6e8b2c26c43da9495af3628c0" gracePeriod=10 Jan 22 15:49:38 crc kubenswrapper[4825]: I0122 15:49:38.984972 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-4w7hj"] Jan 22 15:49:38 crc kubenswrapper[4825]: I0122 15:49:38.991183 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.013177 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"efaf42df-9ed2-41b2-b660-bacb51298b2c","Type":"ContainerStarted","Data":"184810aa0098feaab4414bc9831b8778f88640f5c663a24f73f581a1ab0ad519"} Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.021474 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-4w7hj"] Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.031504 4825 generic.go:334] "Generic (PLEG): container finished" podID="60992c6a-3516-45a6-ab46-7705b343bf46" containerID="ba809397fea502c09c2186b8a90756d6ed5509e6e8b2c26c43da9495af3628c0" exitCode=0 Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.031572 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-dllkj" event={"ID":"60992c6a-3516-45a6-ab46-7705b343bf46","Type":"ContainerDied","Data":"ba809397fea502c09c2186b8a90756d6ed5509e6e8b2c26c43da9495af3628c0"} Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.086077 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d69d25bc-5530-4482-9394-2d89c1b92f5a-dns-svc\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.086155 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d69d25bc-5530-4482-9394-2d89c1b92f5a-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.086196 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d69d25bc-5530-4482-9394-2d89c1b92f5a-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.086344 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d69d25bc-5530-4482-9394-2d89c1b92f5a-config\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.086412 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d69d25bc-5530-4482-9394-2d89c1b92f5a-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.086684 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d69d25bc-5530-4482-9394-2d89c1b92f5a-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.086710 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdgp9\" (UniqueName: \"kubernetes.io/projected/d69d25bc-5530-4482-9394-2d89c1b92f5a-kube-api-access-wdgp9\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc 
kubenswrapper[4825]: I0122 15:49:39.190040 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d69d25bc-5530-4482-9394-2d89c1b92f5a-dns-svc\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.190302 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d69d25bc-5530-4482-9394-2d89c1b92f5a-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.190335 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d69d25bc-5530-4482-9394-2d89c1b92f5a-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.190370 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d69d25bc-5530-4482-9394-2d89c1b92f5a-config\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.190394 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d69d25bc-5530-4482-9394-2d89c1b92f5a-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.190504 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d69d25bc-5530-4482-9394-2d89c1b92f5a-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.190523 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdgp9\" (UniqueName: \"kubernetes.io/projected/d69d25bc-5530-4482-9394-2d89c1b92f5a-kube-api-access-wdgp9\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.191424 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d69d25bc-5530-4482-9394-2d89c1b92f5a-dns-svc\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.191632 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d69d25bc-5530-4482-9394-2d89c1b92f5a-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.192530 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d69d25bc-5530-4482-9394-2d89c1b92f5a-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.193123 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/d69d25bc-5530-4482-9394-2d89c1b92f5a-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.193171 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d69d25bc-5530-4482-9394-2d89c1b92f5a-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.193194 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d69d25bc-5530-4482-9394-2d89c1b92f5a-config\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.225345 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdgp9\" (UniqueName: \"kubernetes.io/projected/d69d25bc-5530-4482-9394-2d89c1b92f5a-kube-api-access-wdgp9\") pod \"dnsmasq-dns-c4b758ff5-4w7hj\" (UID: \"d69d25bc-5530-4482-9394-2d89c1b92f5a\") " pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.345743 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.573693 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.607552 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj7wq\" (UniqueName: \"kubernetes.io/projected/60992c6a-3516-45a6-ab46-7705b343bf46-kube-api-access-hj7wq\") pod \"60992c6a-3516-45a6-ab46-7705b343bf46\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.607602 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-ovsdbserver-nb\") pod \"60992c6a-3516-45a6-ab46-7705b343bf46\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.607715 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-dns-swift-storage-0\") pod \"60992c6a-3516-45a6-ab46-7705b343bf46\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.607932 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-config\") pod \"60992c6a-3516-45a6-ab46-7705b343bf46\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.608065 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-ovsdbserver-sb\") pod \"60992c6a-3516-45a6-ab46-7705b343bf46\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.608094 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-dns-svc\") pod \"60992c6a-3516-45a6-ab46-7705b343bf46\" (UID: \"60992c6a-3516-45a6-ab46-7705b343bf46\") " Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.613412 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60992c6a-3516-45a6-ab46-7705b343bf46-kube-api-access-hj7wq" (OuterVolumeSpecName: "kube-api-access-hj7wq") pod "60992c6a-3516-45a6-ab46-7705b343bf46" (UID: "60992c6a-3516-45a6-ab46-7705b343bf46"). InnerVolumeSpecName "kube-api-access-hj7wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.686217 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "60992c6a-3516-45a6-ab46-7705b343bf46" (UID: "60992c6a-3516-45a6-ab46-7705b343bf46"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.704142 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "60992c6a-3516-45a6-ab46-7705b343bf46" (UID: "60992c6a-3516-45a6-ab46-7705b343bf46"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.711182 4825 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.711504 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj7wq\" (UniqueName: \"kubernetes.io/projected/60992c6a-3516-45a6-ab46-7705b343bf46-kube-api-access-hj7wq\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.711609 4825 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.712305 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-config" (OuterVolumeSpecName: "config") pod "60992c6a-3516-45a6-ab46-7705b343bf46" (UID: "60992c6a-3516-45a6-ab46-7705b343bf46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.727053 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "60992c6a-3516-45a6-ab46-7705b343bf46" (UID: "60992c6a-3516-45a6-ab46-7705b343bf46"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.739902 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "60992c6a-3516-45a6-ab46-7705b343bf46" (UID: "60992c6a-3516-45a6-ab46-7705b343bf46"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.814280 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.814330 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.814342 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60992c6a-3516-45a6-ab46-7705b343bf46-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:39 crc kubenswrapper[4825]: W0122 15:49:39.943704 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd69d25bc_5530_4482_9394_2d89c1b92f5a.slice/crio-2f644c64c9195ef89acf69f71222e2cd338a3273510f2f62d57cb79ac487552f WatchSource:0}: Error finding container 2f644c64c9195ef89acf69f71222e2cd338a3273510f2f62d57cb79ac487552f: Status 404 returned error can't find the container with id 2f644c64c9195ef89acf69f71222e2cd338a3273510f2f62d57cb79ac487552f Jan 22 15:49:39 crc kubenswrapper[4825]: I0122 15:49:39.943859 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-4w7hj"] Jan 22 
15:49:40 crc kubenswrapper[4825]: I0122 15:49:40.089924 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" event={"ID":"d69d25bc-5530-4482-9394-2d89c1b92f5a","Type":"ContainerStarted","Data":"2f644c64c9195ef89acf69f71222e2cd338a3273510f2f62d57cb79ac487552f"} Jan 22 15:49:40 crc kubenswrapper[4825]: I0122 15:49:40.093096 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-dllkj" event={"ID":"60992c6a-3516-45a6-ab46-7705b343bf46","Type":"ContainerDied","Data":"9c18c6bc1c92614a70a6d1a3fccb7cc1b6f3fdf277f0d84ff92b2a0e3a957a92"} Jan 22 15:49:40 crc kubenswrapper[4825]: I0122 15:49:40.093131 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-dllkj" Jan 22 15:49:40 crc kubenswrapper[4825]: I0122 15:49:40.093162 4825 scope.go:117] "RemoveContainer" containerID="ba809397fea502c09c2186b8a90756d6ed5509e6e8b2c26c43da9495af3628c0" Jan 22 15:49:40 crc kubenswrapper[4825]: I0122 15:49:40.231914 4825 scope.go:117] "RemoveContainer" containerID="fb33445c888cfd4d541506e30cceeb38a287f28f0b9c865ca1495f18a6829ab6" Jan 22 15:49:40 crc kubenswrapper[4825]: I0122 15:49:40.249417 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-dllkj"] Jan 22 15:49:40 crc kubenswrapper[4825]: I0122 15:49:40.271005 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-dllkj"] Jan 22 15:49:40 crc kubenswrapper[4825]: I0122 15:49:40.724267 4825 scope.go:117] "RemoveContainer" containerID="a673f8887881f54506ef484bd7fdd9ac65362dad0df6b28c96294289ab4eac9d" Jan 22 15:49:40 crc kubenswrapper[4825]: I0122 15:49:40.758670 4825 scope.go:117] "RemoveContainer" containerID="ad3f02ef8e31a96f53c232cd54cbc40bb4131c68f786f32bfdc467a20dc5f556" Jan 22 15:49:40 crc kubenswrapper[4825]: I0122 15:49:40.790346 4825 scope.go:117] "RemoveContainer" 
containerID="70ed6d4cfa15bfebff09f2f59348d6d8dd1b0f5db11d2ffedada79e265b7aa86" Jan 22 15:49:40 crc kubenswrapper[4825]: I0122 15:49:40.823549 4825 scope.go:117] "RemoveContainer" containerID="d6ec51704f098a43ef8bed3b68a5e1218851817643fb77be6fc903347ae82b9a" Jan 22 15:49:40 crc kubenswrapper[4825]: I0122 15:49:40.850686 4825 scope.go:117] "RemoveContainer" containerID="0e4518939dfa38f9e7b7dcf8cc8c2f5d2fb4b6f1d9599c0b587eb94d42c0f21d" Jan 22 15:49:41 crc kubenswrapper[4825]: I0122 15:49:41.108778 4825 generic.go:334] "Generic (PLEG): container finished" podID="d69d25bc-5530-4482-9394-2d89c1b92f5a" containerID="4341542b5e05370c37d27db3a2d7446ada250126685e4e44009e651bfdd81ac0" exitCode=0 Jan 22 15:49:41 crc kubenswrapper[4825]: I0122 15:49:41.108822 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" event={"ID":"d69d25bc-5530-4482-9394-2d89c1b92f5a","Type":"ContainerDied","Data":"4341542b5e05370c37d27db3a2d7446ada250126685e4e44009e651bfdd81ac0"} Jan 22 15:49:41 crc kubenswrapper[4825]: I0122 15:49:41.535486 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60992c6a-3516-45a6-ab46-7705b343bf46" path="/var/lib/kubelet/pods/60992c6a-3516-45a6-ab46-7705b343bf46/volumes" Jan 22 15:49:42 crc kubenswrapper[4825]: I0122 15:49:42.123641 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" event={"ID":"d69d25bc-5530-4482-9394-2d89c1b92f5a","Type":"ContainerStarted","Data":"bc72c50855682c5722b1c775eda28242d6c56a5c8c4d5af68547c05412a79755"} Jan 22 15:49:42 crc kubenswrapper[4825]: I0122 15:49:42.124947 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:42 crc kubenswrapper[4825]: I0122 15:49:42.131935 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"80f35c3b-7247-4a39-8562-d68602381fa1","Type":"ContainerStarted","Data":"f9c320f9a34590efd0b8d31094a0844d02fc2ba4a561bd507fb2bc3c0318e368"} Jan 22 15:49:42 crc kubenswrapper[4825]: I0122 15:49:42.156727 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" podStartSLOduration=4.15670013 podStartE2EDuration="4.15670013s" podCreationTimestamp="2026-01-22 15:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:49:42.148239831 +0000 UTC m=+1528.909766751" watchObservedRunningTime="2026-01-22 15:49:42.15670013 +0000 UTC m=+1528.918227060" Jan 22 15:49:45 crc kubenswrapper[4825]: I0122 15:49:45.161583 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 15:49:49 crc kubenswrapper[4825]: I0122 15:49:49.348213 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c4b758ff5-4w7hj" Jan 22 15:49:49 crc kubenswrapper[4825]: I0122 15:49:49.450264 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-kc9g7"] Jan 22 15:49:49 crc kubenswrapper[4825]: I0122 15:49:49.450859 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7" podUID="46958c79-89ff-48e9-bb5f-f4ab34575bea" containerName="dnsmasq-dns" containerID="cri-o://6e796a891254fd632b2264705b6587b4e0bc2d248b5a72d2ec807c8d34d00b15" gracePeriod=10 Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.225567 4825 generic.go:334] "Generic (PLEG): container finished" podID="46958c79-89ff-48e9-bb5f-f4ab34575bea" containerID="6e796a891254fd632b2264705b6587b4e0bc2d248b5a72d2ec807c8d34d00b15" exitCode=0 Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.225667 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7" event={"ID":"46958c79-89ff-48e9-bb5f-f4ab34575bea","Type":"ContainerDied","Data":"6e796a891254fd632b2264705b6587b4e0bc2d248b5a72d2ec807c8d34d00b15"} Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.225843 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7" event={"ID":"46958c79-89ff-48e9-bb5f-f4ab34575bea","Type":"ContainerDied","Data":"508c530679702a78ef35adb2f137c5721fb92a653eaa6d3937d4980442dc6394"} Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.225861 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="508c530679702a78ef35adb2f137c5721fb92a653eaa6d3937d4980442dc6394" Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.306881 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7" Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.389263 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-ovsdbserver-sb\") pod \"46958c79-89ff-48e9-bb5f-f4ab34575bea\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.389491 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-ovsdbserver-nb\") pod \"46958c79-89ff-48e9-bb5f-f4ab34575bea\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.389509 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-config\") pod \"46958c79-89ff-48e9-bb5f-f4ab34575bea\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " Jan 22 15:49:51 crc kubenswrapper[4825]: 
I0122 15:49:51.389532 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-openstack-edpm-ipam\") pod \"46958c79-89ff-48e9-bb5f-f4ab34575bea\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.389605 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-dns-svc\") pod \"46958c79-89ff-48e9-bb5f-f4ab34575bea\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.389632 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkpzn\" (UniqueName: \"kubernetes.io/projected/46958c79-89ff-48e9-bb5f-f4ab34575bea-kube-api-access-tkpzn\") pod \"46958c79-89ff-48e9-bb5f-f4ab34575bea\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.389727 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-dns-swift-storage-0\") pod \"46958c79-89ff-48e9-bb5f-f4ab34575bea\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.398410 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46958c79-89ff-48e9-bb5f-f4ab34575bea-kube-api-access-tkpzn" (OuterVolumeSpecName: "kube-api-access-tkpzn") pod "46958c79-89ff-48e9-bb5f-f4ab34575bea" (UID: "46958c79-89ff-48e9-bb5f-f4ab34575bea"). InnerVolumeSpecName "kube-api-access-tkpzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.471616 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-config" (OuterVolumeSpecName: "config") pod "46958c79-89ff-48e9-bb5f-f4ab34575bea" (UID: "46958c79-89ff-48e9-bb5f-f4ab34575bea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.485001 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "46958c79-89ff-48e9-bb5f-f4ab34575bea" (UID: "46958c79-89ff-48e9-bb5f-f4ab34575bea"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.488180 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "46958c79-89ff-48e9-bb5f-f4ab34575bea" (UID: "46958c79-89ff-48e9-bb5f-f4ab34575bea"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.489542 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "46958c79-89ff-48e9-bb5f-f4ab34575bea" (UID: "46958c79-89ff-48e9-bb5f-f4ab34575bea"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.490893 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "46958c79-89ff-48e9-bb5f-f4ab34575bea" (UID: "46958c79-89ff-48e9-bb5f-f4ab34575bea"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.491751 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-dns-svc\") pod \"46958c79-89ff-48e9-bb5f-f4ab34575bea\" (UID: \"46958c79-89ff-48e9-bb5f-f4ab34575bea\") " Jan 22 15:49:51 crc kubenswrapper[4825]: W0122 15:49:51.491863 4825 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/46958c79-89ff-48e9-bb5f-f4ab34575bea/volumes/kubernetes.io~configmap/dns-svc Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.491877 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "46958c79-89ff-48e9-bb5f-f4ab34575bea" (UID: "46958c79-89ff-48e9-bb5f-f4ab34575bea"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.492372 4825 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.492395 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkpzn\" (UniqueName: \"kubernetes.io/projected/46958c79-89ff-48e9-bb5f-f4ab34575bea-kube-api-access-tkpzn\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.492408 4825 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.492419 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.492430 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-config\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.492441 4825 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.498185 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "46958c79-89ff-48e9-bb5f-f4ab34575bea" (UID: 
"46958c79-89ff-48e9-bb5f-f4ab34575bea"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 15:49:51 crc kubenswrapper[4825]: I0122 15:49:51.595033 4825 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46958c79-89ff-48e9-bb5f-f4ab34575bea-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 15:49:52 crc kubenswrapper[4825]: I0122 15:49:52.233254 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-kc9g7" Jan 22 15:49:52 crc kubenswrapper[4825]: I0122 15:49:52.272943 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-kc9g7"] Jan 22 15:49:52 crc kubenswrapper[4825]: I0122 15:49:52.290167 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-kc9g7"] Jan 22 15:49:53 crc kubenswrapper[4825]: I0122 15:49:53.530915 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46958c79-89ff-48e9-bb5f-f4ab34575bea" path="/var/lib/kubelet/pods/46958c79-89ff-48e9-bb5f-f4ab34575bea/volumes" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.737754 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6"] Jan 22 15:50:02 crc kubenswrapper[4825]: E0122 15:50:02.740711 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46958c79-89ff-48e9-bb5f-f4ab34575bea" containerName="dnsmasq-dns" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.740814 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="46958c79-89ff-48e9-bb5f-f4ab34575bea" containerName="dnsmasq-dns" Jan 22 15:50:02 crc kubenswrapper[4825]: E0122 15:50:02.740897 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60992c6a-3516-45a6-ab46-7705b343bf46" containerName="dnsmasq-dns" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 
15:50:02.740972 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="60992c6a-3516-45a6-ab46-7705b343bf46" containerName="dnsmasq-dns" Jan 22 15:50:02 crc kubenswrapper[4825]: E0122 15:50:02.741104 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60992c6a-3516-45a6-ab46-7705b343bf46" containerName="init" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.741190 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="60992c6a-3516-45a6-ab46-7705b343bf46" containerName="init" Jan 22 15:50:02 crc kubenswrapper[4825]: E0122 15:50:02.741293 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46958c79-89ff-48e9-bb5f-f4ab34575bea" containerName="init" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.741369 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="46958c79-89ff-48e9-bb5f-f4ab34575bea" containerName="init" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.741735 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="46958c79-89ff-48e9-bb5f-f4ab34575bea" containerName="dnsmasq-dns" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.741821 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="60992c6a-3516-45a6-ab46-7705b343bf46" containerName="dnsmasq-dns" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.743082 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.746449 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.746807 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.747559 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql4gv" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.753101 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.759352 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6"] Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.833324 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07e8bf5e-6706-4987-8447-e918785d8f38-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6\" (UID: \"07e8bf5e-6706-4987-8447-e918785d8f38\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.833621 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e8bf5e-6706-4987-8447-e918785d8f38-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6\" (UID: \"07e8bf5e-6706-4987-8447-e918785d8f38\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" Jan 22 15:50:02 crc 
kubenswrapper[4825]: I0122 15:50:02.833659 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv8zb\" (UniqueName: \"kubernetes.io/projected/07e8bf5e-6706-4987-8447-e918785d8f38-kube-api-access-xv8zb\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6\" (UID: \"07e8bf5e-6706-4987-8447-e918785d8f38\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.833725 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07e8bf5e-6706-4987-8447-e918785d8f38-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6\" (UID: \"07e8bf5e-6706-4987-8447-e918785d8f38\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.941597 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07e8bf5e-6706-4987-8447-e918785d8f38-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6\" (UID: \"07e8bf5e-6706-4987-8447-e918785d8f38\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.941728 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e8bf5e-6706-4987-8447-e918785d8f38-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6\" (UID: \"07e8bf5e-6706-4987-8447-e918785d8f38\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.941787 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv8zb\" (UniqueName: 
\"kubernetes.io/projected/07e8bf5e-6706-4987-8447-e918785d8f38-kube-api-access-xv8zb\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6\" (UID: \"07e8bf5e-6706-4987-8447-e918785d8f38\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.941901 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07e8bf5e-6706-4987-8447-e918785d8f38-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6\" (UID: \"07e8bf5e-6706-4987-8447-e918785d8f38\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.956456 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07e8bf5e-6706-4987-8447-e918785d8f38-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6\" (UID: \"07e8bf5e-6706-4987-8447-e918785d8f38\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.956539 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07e8bf5e-6706-4987-8447-e918785d8f38-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6\" (UID: \"07e8bf5e-6706-4987-8447-e918785d8f38\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.973366 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e8bf5e-6706-4987-8447-e918785d8f38-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6\" (UID: \"07e8bf5e-6706-4987-8447-e918785d8f38\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" Jan 22 15:50:02 crc kubenswrapper[4825]: I0122 15:50:02.974049 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv8zb\" (UniqueName: \"kubernetes.io/projected/07e8bf5e-6706-4987-8447-e918785d8f38-kube-api-access-xv8zb\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6\" (UID: \"07e8bf5e-6706-4987-8447-e918785d8f38\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" Jan 22 15:50:03 crc kubenswrapper[4825]: I0122 15:50:03.081148 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" Jan 22 15:50:03 crc kubenswrapper[4825]: I0122 15:50:03.711105 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6"] Jan 22 15:50:04 crc kubenswrapper[4825]: I0122 15:50:04.360169 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" event={"ID":"07e8bf5e-6706-4987-8447-e918785d8f38","Type":"ContainerStarted","Data":"7cbe5ea3f5df44ddb05529cd2c94ce321e5057a89452b2ccb24c871a33a3dec5"} Jan 22 15:50:10 crc kubenswrapper[4825]: I0122 15:50:10.120701 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Jan 22 15:50:11 crc kubenswrapper[4825]: I0122 15:50:11.458835 4825 generic.go:334] "Generic (PLEG): container finished" podID="efaf42df-9ed2-41b2-b660-bacb51298b2c" containerID="184810aa0098feaab4414bc9831b8778f88640f5c663a24f73f581a1ab0ad519" exitCode=0 Jan 22 15:50:11 crc kubenswrapper[4825]: I0122 15:50:11.458957 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"efaf42df-9ed2-41b2-b660-bacb51298b2c","Type":"ContainerDied","Data":"184810aa0098feaab4414bc9831b8778f88640f5c663a24f73f581a1ab0ad519"} Jan 22 15:50:17 crc 
kubenswrapper[4825]: I0122 15:50:17.454241 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-api-0" podUID="a099ee6b-e91c-4017-92ec-ad9289342d56" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.251:8889/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 15:50:17 crc kubenswrapper[4825]: I0122 15:50:17.454241 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cloudkitty-api-0" podUID="a099ee6b-e91c-4017-92ec-ad9289342d56" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.251:8889/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 15:50:20 crc kubenswrapper[4825]: E0122 15:50:20.747572 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Jan 22 15:50:20 crc kubenswrapper[4825]: E0122 15:50:20.748525 4825 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 22 15:50:20 crc kubenswrapper[4825]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Jan 22 15:50:20 crc kubenswrapper[4825]: - hosts: all Jan 22 15:50:20 crc kubenswrapper[4825]: strategy: linear Jan 22 15:50:20 crc kubenswrapper[4825]: tasks: Jan 22 15:50:20 crc kubenswrapper[4825]: - name: Enable podified-repos Jan 22 15:50:20 crc kubenswrapper[4825]: become: true Jan 22 15:50:20 crc kubenswrapper[4825]: ansible.builtin.shell: | Jan 22 15:50:20 crc kubenswrapper[4825]: set -euxo pipefail Jan 22 15:50:20 crc 
kubenswrapper[4825]: pushd /var/tmp Jan 22 15:50:20 crc kubenswrapper[4825]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Jan 22 15:50:20 crc kubenswrapper[4825]: pushd repo-setup-main Jan 22 15:50:20 crc kubenswrapper[4825]: python3 -m venv ./venv Jan 22 15:50:20 crc kubenswrapper[4825]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Jan 22 15:50:20 crc kubenswrapper[4825]: ./venv/bin/repo-setup current-podified -b antelope Jan 22 15:50:20 crc kubenswrapper[4825]: popd Jan 22 15:50:20 crc kubenswrapper[4825]: rm -rf repo-setup-main Jan 22 15:50:20 crc kubenswrapper[4825]: Jan 22 15:50:20 crc kubenswrapper[4825]: Jan 22 15:50:20 crc kubenswrapper[4825]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Jan 22 15:50:20 crc kubenswrapper[4825]: edpm_override_hosts: openstack-edpm-ipam Jan 22 15:50:20 crc kubenswrapper[4825]: edpm_service_type: repo-setup Jan 22 15:50:20 crc kubenswrapper[4825]: Jan 22 15:50:20 crc kubenswrapper[4825]: Jan 22 15:50:20 crc kubenswrapper[4825]: 
,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xv8zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6_openstack(07e8bf5e-6706-4987-8447-e918785d8f38): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 22 15:50:20 crc kubenswrapper[4825]: > logger="UnhandledError" Jan 22 15:50:20 crc kubenswrapper[4825]: E0122 15:50:20.751279 4825 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" podUID="07e8bf5e-6706-4987-8447-e918785d8f38" Jan 22 15:50:21 crc kubenswrapper[4825]: I0122 15:50:21.630299 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"efaf42df-9ed2-41b2-b660-bacb51298b2c","Type":"ContainerStarted","Data":"acab48ba083b3fd992b63c0a91edf394cb1f32a3a57d02c64bf868498a9e7cd3"} Jan 22 15:50:21 crc kubenswrapper[4825]: I0122 15:50:21.631330 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 22 15:50:21 crc kubenswrapper[4825]: E0122 15:50:21.632067 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" podUID="07e8bf5e-6706-4987-8447-e918785d8f38" Jan 22 15:50:21 crc kubenswrapper[4825]: I0122 15:50:21.683792 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=51.683760831 podStartE2EDuration="51.683760831s" podCreationTimestamp="2026-01-22 15:49:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:50:21.677928537 +0000 UTC m=+1568.439455457" watchObservedRunningTime="2026-01-22 15:50:21.683760831 +0000 UTC m=+1568.445287741" Jan 22 15:50:22 crc kubenswrapper[4825]: I0122 15:50:22.644053 4825 generic.go:334] "Generic (PLEG): container finished" 
podID="80f35c3b-7247-4a39-8562-d68602381fa1" containerID="f9c320f9a34590efd0b8d31094a0844d02fc2ba4a561bd507fb2bc3c0318e368" exitCode=0 Jan 22 15:50:22 crc kubenswrapper[4825]: I0122 15:50:22.645289 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"80f35c3b-7247-4a39-8562-d68602381fa1","Type":"ContainerDied","Data":"f9c320f9a34590efd0b8d31094a0844d02fc2ba4a561bd507fb2bc3c0318e368"} Jan 22 15:50:23 crc kubenswrapper[4825]: I0122 15:50:23.221970 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xhls8"] Jan 22 15:50:23 crc kubenswrapper[4825]: I0122 15:50:23.224819 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xhls8" Jan 22 15:50:23 crc kubenswrapper[4825]: I0122 15:50:23.645522 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glqcf\" (UniqueName: \"kubernetes.io/projected/dfb70118-609c-4996-a406-81f8ee496151-kube-api-access-glqcf\") pod \"redhat-marketplace-xhls8\" (UID: \"dfb70118-609c-4996-a406-81f8ee496151\") " pod="openshift-marketplace/redhat-marketplace-xhls8" Jan 22 15:50:23 crc kubenswrapper[4825]: I0122 15:50:23.645645 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfb70118-609c-4996-a406-81f8ee496151-utilities\") pod \"redhat-marketplace-xhls8\" (UID: \"dfb70118-609c-4996-a406-81f8ee496151\") " pod="openshift-marketplace/redhat-marketplace-xhls8" Jan 22 15:50:23 crc kubenswrapper[4825]: I0122 15:50:23.645680 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfb70118-609c-4996-a406-81f8ee496151-catalog-content\") pod \"redhat-marketplace-xhls8\" (UID: \"dfb70118-609c-4996-a406-81f8ee496151\") " 
pod="openshift-marketplace/redhat-marketplace-xhls8" Jan 22 15:50:23 crc kubenswrapper[4825]: I0122 15:50:23.708067 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"80f35c3b-7247-4a39-8562-d68602381fa1","Type":"ContainerStarted","Data":"c70be0b7c5b324f4784e756463ff52c01bb1438be61fe1c8f6da8f2f12b230f4"} Jan 22 15:50:23 crc kubenswrapper[4825]: I0122 15:50:23.708138 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xhls8"] Jan 22 15:50:23 crc kubenswrapper[4825]: I0122 15:50:23.709225 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 22 15:50:23 crc kubenswrapper[4825]: I0122 15:50:23.747893 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfb70118-609c-4996-a406-81f8ee496151-utilities\") pod \"redhat-marketplace-xhls8\" (UID: \"dfb70118-609c-4996-a406-81f8ee496151\") " pod="openshift-marketplace/redhat-marketplace-xhls8" Jan 22 15:50:23 crc kubenswrapper[4825]: I0122 15:50:23.747961 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfb70118-609c-4996-a406-81f8ee496151-catalog-content\") pod \"redhat-marketplace-xhls8\" (UID: \"dfb70118-609c-4996-a406-81f8ee496151\") " pod="openshift-marketplace/redhat-marketplace-xhls8" Jan 22 15:50:23 crc kubenswrapper[4825]: I0122 15:50:23.748253 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glqcf\" (UniqueName: \"kubernetes.io/projected/dfb70118-609c-4996-a406-81f8ee496151-kube-api-access-glqcf\") pod \"redhat-marketplace-xhls8\" (UID: \"dfb70118-609c-4996-a406-81f8ee496151\") " pod="openshift-marketplace/redhat-marketplace-xhls8" Jan 22 15:50:23 crc kubenswrapper[4825]: I0122 15:50:23.748898 4825 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfb70118-609c-4996-a406-81f8ee496151-utilities\") pod \"redhat-marketplace-xhls8\" (UID: \"dfb70118-609c-4996-a406-81f8ee496151\") " pod="openshift-marketplace/redhat-marketplace-xhls8" Jan 22 15:50:23 crc kubenswrapper[4825]: I0122 15:50:23.749226 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfb70118-609c-4996-a406-81f8ee496151-catalog-content\") pod \"redhat-marketplace-xhls8\" (UID: \"dfb70118-609c-4996-a406-81f8ee496151\") " pod="openshift-marketplace/redhat-marketplace-xhls8" Jan 22 15:50:23 crc kubenswrapper[4825]: I0122 15:50:23.758365 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=55.758329689 podStartE2EDuration="55.758329689s" podCreationTimestamp="2026-01-22 15:49:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 15:50:23.735826785 +0000 UTC m=+1570.497353725" watchObservedRunningTime="2026-01-22 15:50:23.758329689 +0000 UTC m=+1570.519856599" Jan 22 15:50:23 crc kubenswrapper[4825]: I0122 15:50:23.790607 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glqcf\" (UniqueName: \"kubernetes.io/projected/dfb70118-609c-4996-a406-81f8ee496151-kube-api-access-glqcf\") pod \"redhat-marketplace-xhls8\" (UID: \"dfb70118-609c-4996-a406-81f8ee496151\") " pod="openshift-marketplace/redhat-marketplace-xhls8" Jan 22 15:50:23 crc kubenswrapper[4825]: I0122 15:50:23.994715 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xhls8" Jan 22 15:50:24 crc kubenswrapper[4825]: I0122 15:50:24.554593 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xhls8"] Jan 22 15:50:24 crc kubenswrapper[4825]: I0122 15:50:24.717552 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhls8" event={"ID":"dfb70118-609c-4996-a406-81f8ee496151","Type":"ContainerStarted","Data":"4e476cf824b2a2211361ee2b257e1d906438c1660839c1cda86101c83215b12a"} Jan 22 15:50:25 crc kubenswrapper[4825]: I0122 15:50:25.730728 4825 generic.go:334] "Generic (PLEG): container finished" podID="dfb70118-609c-4996-a406-81f8ee496151" containerID="5ee84cc84a66ff16cb139515f9da1a4c3b2f822cd7649862375d86ea62ddc701" exitCode=0 Jan 22 15:50:25 crc kubenswrapper[4825]: I0122 15:50:25.730788 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhls8" event={"ID":"dfb70118-609c-4996-a406-81f8ee496151","Type":"ContainerDied","Data":"5ee84cc84a66ff16cb139515f9da1a4c3b2f822cd7649862375d86ea62ddc701"} Jan 22 15:50:27 crc kubenswrapper[4825]: I0122 15:50:27.922856 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhls8" event={"ID":"dfb70118-609c-4996-a406-81f8ee496151","Type":"ContainerStarted","Data":"372bcc5c3f873ffa178650746c4a5437987e5b6793394bbd225b5f644ffec15b"} Jan 22 15:50:28 crc kubenswrapper[4825]: I0122 15:50:28.933723 4825 generic.go:334] "Generic (PLEG): container finished" podID="dfb70118-609c-4996-a406-81f8ee496151" containerID="372bcc5c3f873ffa178650746c4a5437987e5b6793394bbd225b5f644ffec15b" exitCode=0 Jan 22 15:50:28 crc kubenswrapper[4825]: I0122 15:50:28.933824 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhls8" 
event={"ID":"dfb70118-609c-4996-a406-81f8ee496151","Type":"ContainerDied","Data":"372bcc5c3f873ffa178650746c4a5437987e5b6793394bbd225b5f644ffec15b"} Jan 22 15:50:29 crc kubenswrapper[4825]: I0122 15:50:29.949103 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhls8" event={"ID":"dfb70118-609c-4996-a406-81f8ee496151","Type":"ContainerStarted","Data":"ed6abb6c9f70aef0a6528ab800b4ff63057bb674007c5325ba3483bda97f2fa8"} Jan 22 15:50:29 crc kubenswrapper[4825]: I0122 15:50:29.974249 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xhls8" podStartSLOduration=3.227937341 podStartE2EDuration="6.974229252s" podCreationTimestamp="2026-01-22 15:50:23 +0000 UTC" firstStartedPulling="2026-01-22 15:50:25.733764646 +0000 UTC m=+1572.495291556" lastFinishedPulling="2026-01-22 15:50:29.480056557 +0000 UTC m=+1576.241583467" observedRunningTime="2026-01-22 15:50:29.972471942 +0000 UTC m=+1576.733998862" watchObservedRunningTime="2026-01-22 15:50:29.974229252 +0000 UTC m=+1576.735756162" Jan 22 15:50:31 crc kubenswrapper[4825]: I0122 15:50:31.109139 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="efaf42df-9ed2-41b2-b660-bacb51298b2c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.250:5671: connect: connection refused" Jan 22 15:50:33 crc kubenswrapper[4825]: I0122 15:50:33.997699 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xhls8" Jan 22 15:50:33 crc kubenswrapper[4825]: I0122 15:50:33.998283 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xhls8" Jan 22 15:50:34 crc kubenswrapper[4825]: I0122 15:50:34.050448 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xhls8" Jan 22 
15:50:35 crc kubenswrapper[4825]: I0122 15:50:35.055750 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xhls8" Jan 22 15:50:35 crc kubenswrapper[4825]: I0122 15:50:35.114069 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xhls8"] Jan 22 15:50:35 crc kubenswrapper[4825]: I0122 15:50:35.281200 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 15:50:35 crc kubenswrapper[4825]: I0122 15:50:35.541887 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:50:35 crc kubenswrapper[4825]: I0122 15:50:35.542573 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:50:36 crc kubenswrapper[4825]: I0122 15:50:36.012671 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" event={"ID":"07e8bf5e-6706-4987-8447-e918785d8f38","Type":"ContainerStarted","Data":"1d71de35f6df563122d2ff34f757c6b54cc4d27696e39307b75cae9669469539"} Jan 22 15:50:37 crc kubenswrapper[4825]: I0122 15:50:37.024127 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xhls8" podUID="dfb70118-609c-4996-a406-81f8ee496151" containerName="registry-server" containerID="cri-o://ed6abb6c9f70aef0a6528ab800b4ff63057bb674007c5325ba3483bda97f2fa8" gracePeriod=2 Jan 22 15:50:37 
crc kubenswrapper[4825]: I0122 15:50:37.575204 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xhls8" Jan 22 15:50:37 crc kubenswrapper[4825]: I0122 15:50:37.600604 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" podStartSLOduration=4.028088234 podStartE2EDuration="35.600582892s" podCreationTimestamp="2026-01-22 15:50:02 +0000 UTC" firstStartedPulling="2026-01-22 15:50:03.705675688 +0000 UTC m=+1550.467202598" lastFinishedPulling="2026-01-22 15:50:35.278170346 +0000 UTC m=+1582.039697256" observedRunningTime="2026-01-22 15:50:36.030334196 +0000 UTC m=+1582.791861106" watchObservedRunningTime="2026-01-22 15:50:37.600582892 +0000 UTC m=+1584.362109802" Jan 22 15:50:37 crc kubenswrapper[4825]: I0122 15:50:37.707161 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glqcf\" (UniqueName: \"kubernetes.io/projected/dfb70118-609c-4996-a406-81f8ee496151-kube-api-access-glqcf\") pod \"dfb70118-609c-4996-a406-81f8ee496151\" (UID: \"dfb70118-609c-4996-a406-81f8ee496151\") " Jan 22 15:50:37 crc kubenswrapper[4825]: I0122 15:50:37.707208 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfb70118-609c-4996-a406-81f8ee496151-utilities\") pod \"dfb70118-609c-4996-a406-81f8ee496151\" (UID: \"dfb70118-609c-4996-a406-81f8ee496151\") " Jan 22 15:50:37 crc kubenswrapper[4825]: I0122 15:50:37.707310 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfb70118-609c-4996-a406-81f8ee496151-catalog-content\") pod \"dfb70118-609c-4996-a406-81f8ee496151\" (UID: \"dfb70118-609c-4996-a406-81f8ee496151\") " Jan 22 15:50:37 crc kubenswrapper[4825]: I0122 15:50:37.708131 4825 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfb70118-609c-4996-a406-81f8ee496151-utilities" (OuterVolumeSpecName: "utilities") pod "dfb70118-609c-4996-a406-81f8ee496151" (UID: "dfb70118-609c-4996-a406-81f8ee496151"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:50:37 crc kubenswrapper[4825]: I0122 15:50:37.723250 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfb70118-609c-4996-a406-81f8ee496151-kube-api-access-glqcf" (OuterVolumeSpecName: "kube-api-access-glqcf") pod "dfb70118-609c-4996-a406-81f8ee496151" (UID: "dfb70118-609c-4996-a406-81f8ee496151"). InnerVolumeSpecName "kube-api-access-glqcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:50:37 crc kubenswrapper[4825]: I0122 15:50:37.733840 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfb70118-609c-4996-a406-81f8ee496151-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dfb70118-609c-4996-a406-81f8ee496151" (UID: "dfb70118-609c-4996-a406-81f8ee496151"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:50:37 crc kubenswrapper[4825]: I0122 15:50:37.809875 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glqcf\" (UniqueName: \"kubernetes.io/projected/dfb70118-609c-4996-a406-81f8ee496151-kube-api-access-glqcf\") on node \"crc\" DevicePath \"\"" Jan 22 15:50:37 crc kubenswrapper[4825]: I0122 15:50:37.810280 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfb70118-609c-4996-a406-81f8ee496151-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 15:50:37 crc kubenswrapper[4825]: I0122 15:50:37.810350 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfb70118-609c-4996-a406-81f8ee496151-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 15:50:38 crc kubenswrapper[4825]: I0122 15:50:38.037573 4825 generic.go:334] "Generic (PLEG): container finished" podID="dfb70118-609c-4996-a406-81f8ee496151" containerID="ed6abb6c9f70aef0a6528ab800b4ff63057bb674007c5325ba3483bda97f2fa8" exitCode=0 Jan 22 15:50:38 crc kubenswrapper[4825]: I0122 15:50:38.037630 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhls8" event={"ID":"dfb70118-609c-4996-a406-81f8ee496151","Type":"ContainerDied","Data":"ed6abb6c9f70aef0a6528ab800b4ff63057bb674007c5325ba3483bda97f2fa8"} Jan 22 15:50:38 crc kubenswrapper[4825]: I0122 15:50:38.037664 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhls8" event={"ID":"dfb70118-609c-4996-a406-81f8ee496151","Type":"ContainerDied","Data":"4e476cf824b2a2211361ee2b257e1d906438c1660839c1cda86101c83215b12a"} Jan 22 15:50:38 crc kubenswrapper[4825]: I0122 15:50:38.037686 4825 scope.go:117] "RemoveContainer" containerID="ed6abb6c9f70aef0a6528ab800b4ff63057bb674007c5325ba3483bda97f2fa8" Jan 22 15:50:38 crc kubenswrapper[4825]: I0122 
15:50:38.037918 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xhls8" Jan 22 15:50:38 crc kubenswrapper[4825]: I0122 15:50:38.063411 4825 scope.go:117] "RemoveContainer" containerID="372bcc5c3f873ffa178650746c4a5437987e5b6793394bbd225b5f644ffec15b" Jan 22 15:50:38 crc kubenswrapper[4825]: I0122 15:50:38.078583 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xhls8"] Jan 22 15:50:38 crc kubenswrapper[4825]: I0122 15:50:38.089068 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xhls8"] Jan 22 15:50:38 crc kubenswrapper[4825]: I0122 15:50:38.108163 4825 scope.go:117] "RemoveContainer" containerID="5ee84cc84a66ff16cb139515f9da1a4c3b2f822cd7649862375d86ea62ddc701" Jan 22 15:50:38 crc kubenswrapper[4825]: I0122 15:50:38.137874 4825 scope.go:117] "RemoveContainer" containerID="ed6abb6c9f70aef0a6528ab800b4ff63057bb674007c5325ba3483bda97f2fa8" Jan 22 15:50:38 crc kubenswrapper[4825]: E0122 15:50:38.144574 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed6abb6c9f70aef0a6528ab800b4ff63057bb674007c5325ba3483bda97f2fa8\": container with ID starting with ed6abb6c9f70aef0a6528ab800b4ff63057bb674007c5325ba3483bda97f2fa8 not found: ID does not exist" containerID="ed6abb6c9f70aef0a6528ab800b4ff63057bb674007c5325ba3483bda97f2fa8" Jan 22 15:50:38 crc kubenswrapper[4825]: I0122 15:50:38.144641 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed6abb6c9f70aef0a6528ab800b4ff63057bb674007c5325ba3483bda97f2fa8"} err="failed to get container status \"ed6abb6c9f70aef0a6528ab800b4ff63057bb674007c5325ba3483bda97f2fa8\": rpc error: code = NotFound desc = could not find container \"ed6abb6c9f70aef0a6528ab800b4ff63057bb674007c5325ba3483bda97f2fa8\": container with ID starting with 
ed6abb6c9f70aef0a6528ab800b4ff63057bb674007c5325ba3483bda97f2fa8 not found: ID does not exist" Jan 22 15:50:38 crc kubenswrapper[4825]: I0122 15:50:38.144695 4825 scope.go:117] "RemoveContainer" containerID="372bcc5c3f873ffa178650746c4a5437987e5b6793394bbd225b5f644ffec15b" Jan 22 15:50:38 crc kubenswrapper[4825]: E0122 15:50:38.145230 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"372bcc5c3f873ffa178650746c4a5437987e5b6793394bbd225b5f644ffec15b\": container with ID starting with 372bcc5c3f873ffa178650746c4a5437987e5b6793394bbd225b5f644ffec15b not found: ID does not exist" containerID="372bcc5c3f873ffa178650746c4a5437987e5b6793394bbd225b5f644ffec15b" Jan 22 15:50:38 crc kubenswrapper[4825]: I0122 15:50:38.145272 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"372bcc5c3f873ffa178650746c4a5437987e5b6793394bbd225b5f644ffec15b"} err="failed to get container status \"372bcc5c3f873ffa178650746c4a5437987e5b6793394bbd225b5f644ffec15b\": rpc error: code = NotFound desc = could not find container \"372bcc5c3f873ffa178650746c4a5437987e5b6793394bbd225b5f644ffec15b\": container with ID starting with 372bcc5c3f873ffa178650746c4a5437987e5b6793394bbd225b5f644ffec15b not found: ID does not exist" Jan 22 15:50:38 crc kubenswrapper[4825]: I0122 15:50:38.145299 4825 scope.go:117] "RemoveContainer" containerID="5ee84cc84a66ff16cb139515f9da1a4c3b2f822cd7649862375d86ea62ddc701" Jan 22 15:50:38 crc kubenswrapper[4825]: E0122 15:50:38.145658 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ee84cc84a66ff16cb139515f9da1a4c3b2f822cd7649862375d86ea62ddc701\": container with ID starting with 5ee84cc84a66ff16cb139515f9da1a4c3b2f822cd7649862375d86ea62ddc701 not found: ID does not exist" containerID="5ee84cc84a66ff16cb139515f9da1a4c3b2f822cd7649862375d86ea62ddc701" Jan 22 15:50:38 crc 
kubenswrapper[4825]: I0122 15:50:38.145681 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ee84cc84a66ff16cb139515f9da1a4c3b2f822cd7649862375d86ea62ddc701"} err="failed to get container status \"5ee84cc84a66ff16cb139515f9da1a4c3b2f822cd7649862375d86ea62ddc701\": rpc error: code = NotFound desc = could not find container \"5ee84cc84a66ff16cb139515f9da1a4c3b2f822cd7649862375d86ea62ddc701\": container with ID starting with 5ee84cc84a66ff16cb139515f9da1a4c3b2f822cd7649862375d86ea62ddc701 not found: ID does not exist" Jan 22 15:50:39 crc kubenswrapper[4825]: I0122 15:50:39.276427 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 22 15:50:39 crc kubenswrapper[4825]: I0122 15:50:39.531081 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfb70118-609c-4996-a406-81f8ee496151" path="/var/lib/kubelet/pods/dfb70118-609c-4996-a406-81f8ee496151/volumes" Jan 22 15:50:41 crc kubenswrapper[4825]: I0122 15:50:41.079420 4825 scope.go:117] "RemoveContainer" containerID="068f8f66108a9aeaf25df845bbf6b3e92de862d40a0e022c96fdecd65997b158" Jan 22 15:50:41 crc kubenswrapper[4825]: I0122 15:50:41.109266 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 22 15:50:50 crc kubenswrapper[4825]: I0122 15:50:50.171219 4825 generic.go:334] "Generic (PLEG): container finished" podID="07e8bf5e-6706-4987-8447-e918785d8f38" containerID="1d71de35f6df563122d2ff34f757c6b54cc4d27696e39307b75cae9669469539" exitCode=0 Jan 22 15:50:50 crc kubenswrapper[4825]: I0122 15:50:50.171313 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" event={"ID":"07e8bf5e-6706-4987-8447-e918785d8f38","Type":"ContainerDied","Data":"1d71de35f6df563122d2ff34f757c6b54cc4d27696e39307b75cae9669469539"} Jan 22 15:50:51 crc kubenswrapper[4825]: I0122 
15:50:51.774345 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" Jan 22 15:50:51 crc kubenswrapper[4825]: I0122 15:50:51.835842 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv8zb\" (UniqueName: \"kubernetes.io/projected/07e8bf5e-6706-4987-8447-e918785d8f38-kube-api-access-xv8zb\") pod \"07e8bf5e-6706-4987-8447-e918785d8f38\" (UID: \"07e8bf5e-6706-4987-8447-e918785d8f38\") " Jan 22 15:50:51 crc kubenswrapper[4825]: I0122 15:50:51.835939 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07e8bf5e-6706-4987-8447-e918785d8f38-ssh-key-openstack-edpm-ipam\") pod \"07e8bf5e-6706-4987-8447-e918785d8f38\" (UID: \"07e8bf5e-6706-4987-8447-e918785d8f38\") " Jan 22 15:50:51 crc kubenswrapper[4825]: I0122 15:50:51.836094 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07e8bf5e-6706-4987-8447-e918785d8f38-inventory\") pod \"07e8bf5e-6706-4987-8447-e918785d8f38\" (UID: \"07e8bf5e-6706-4987-8447-e918785d8f38\") " Jan 22 15:50:51 crc kubenswrapper[4825]: I0122 15:50:51.836321 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e8bf5e-6706-4987-8447-e918785d8f38-repo-setup-combined-ca-bundle\") pod \"07e8bf5e-6706-4987-8447-e918785d8f38\" (UID: \"07e8bf5e-6706-4987-8447-e918785d8f38\") " Jan 22 15:50:51 crc kubenswrapper[4825]: I0122 15:50:51.843413 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e8bf5e-6706-4987-8447-e918785d8f38-kube-api-access-xv8zb" (OuterVolumeSpecName: "kube-api-access-xv8zb") pod "07e8bf5e-6706-4987-8447-e918785d8f38" (UID: "07e8bf5e-6706-4987-8447-e918785d8f38"). 
InnerVolumeSpecName "kube-api-access-xv8zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:50:51 crc kubenswrapper[4825]: I0122 15:50:51.855151 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e8bf5e-6706-4987-8447-e918785d8f38-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "07e8bf5e-6706-4987-8447-e918785d8f38" (UID: "07e8bf5e-6706-4987-8447-e918785d8f38"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:50:51 crc kubenswrapper[4825]: I0122 15:50:51.868884 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e8bf5e-6706-4987-8447-e918785d8f38-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "07e8bf5e-6706-4987-8447-e918785d8f38" (UID: "07e8bf5e-6706-4987-8447-e918785d8f38"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:50:51 crc kubenswrapper[4825]: I0122 15:50:51.895238 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e8bf5e-6706-4987-8447-e918785d8f38-inventory" (OuterVolumeSpecName: "inventory") pod "07e8bf5e-6706-4987-8447-e918785d8f38" (UID: "07e8bf5e-6706-4987-8447-e918785d8f38"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:50:51 crc kubenswrapper[4825]: I0122 15:50:51.939261 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv8zb\" (UniqueName: \"kubernetes.io/projected/07e8bf5e-6706-4987-8447-e918785d8f38-kube-api-access-xv8zb\") on node \"crc\" DevicePath \"\"" Jan 22 15:50:51 crc kubenswrapper[4825]: I0122 15:50:51.939336 4825 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07e8bf5e-6706-4987-8447-e918785d8f38-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 15:50:51 crc kubenswrapper[4825]: I0122 15:50:51.939351 4825 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07e8bf5e-6706-4987-8447-e918785d8f38-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 15:50:51 crc kubenswrapper[4825]: I0122 15:50:51.939366 4825 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e8bf5e-6706-4987-8447-e918785d8f38-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.196351 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.196227 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6" event={"ID":"07e8bf5e-6706-4987-8447-e918785d8f38","Type":"ContainerDied","Data":"7cbe5ea3f5df44ddb05529cd2c94ce321e5057a89452b2ccb24c871a33a3dec5"} Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.198308 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cbe5ea3f5df44ddb05529cd2c94ce321e5057a89452b2ccb24c871a33a3dec5" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.281268 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm"] Jan 22 15:50:52 crc kubenswrapper[4825]: E0122 15:50:52.290639 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfb70118-609c-4996-a406-81f8ee496151" containerName="extract-content" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.290684 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfb70118-609c-4996-a406-81f8ee496151" containerName="extract-content" Jan 22 15:50:52 crc kubenswrapper[4825]: E0122 15:50:52.290707 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfb70118-609c-4996-a406-81f8ee496151" containerName="registry-server" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.290717 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfb70118-609c-4996-a406-81f8ee496151" containerName="registry-server" Jan 22 15:50:52 crc kubenswrapper[4825]: E0122 15:50:52.290746 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfb70118-609c-4996-a406-81f8ee496151" containerName="extract-utilities" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.290757 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfb70118-609c-4996-a406-81f8ee496151" 
containerName="extract-utilities" Jan 22 15:50:52 crc kubenswrapper[4825]: E0122 15:50:52.290796 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e8bf5e-6706-4987-8447-e918785d8f38" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.290806 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e8bf5e-6706-4987-8447-e918785d8f38" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.291128 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfb70118-609c-4996-a406-81f8ee496151" containerName="registry-server" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.291176 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e8bf5e-6706-4987-8447-e918785d8f38" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.292261 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.293489 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm"] Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.306670 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.306672 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.306742 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql4gv" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.306933 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.345607 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c754afdb-51d7-442c-a0eb-baf47399beb7-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-n6vtm\" (UID: \"c754afdb-51d7-442c-a0eb-baf47399beb7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.345729 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5zm6\" (UniqueName: \"kubernetes.io/projected/c754afdb-51d7-442c-a0eb-baf47399beb7-kube-api-access-j5zm6\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-n6vtm\" (UID: \"c754afdb-51d7-442c-a0eb-baf47399beb7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.345759 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c754afdb-51d7-442c-a0eb-baf47399beb7-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-n6vtm\" (UID: \"c754afdb-51d7-442c-a0eb-baf47399beb7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.447651 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c754afdb-51d7-442c-a0eb-baf47399beb7-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-n6vtm\" (UID: \"c754afdb-51d7-442c-a0eb-baf47399beb7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.447859 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5zm6\" (UniqueName: \"kubernetes.io/projected/c754afdb-51d7-442c-a0eb-baf47399beb7-kube-api-access-j5zm6\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-n6vtm\" (UID: \"c754afdb-51d7-442c-a0eb-baf47399beb7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.447923 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c754afdb-51d7-442c-a0eb-baf47399beb7-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-n6vtm\" (UID: \"c754afdb-51d7-442c-a0eb-baf47399beb7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.451846 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c754afdb-51d7-442c-a0eb-baf47399beb7-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-n6vtm\" (UID: \"c754afdb-51d7-442c-a0eb-baf47399beb7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.451906 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c754afdb-51d7-442c-a0eb-baf47399beb7-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-n6vtm\" (UID: \"c754afdb-51d7-442c-a0eb-baf47399beb7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.466893 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5zm6\" (UniqueName: \"kubernetes.io/projected/c754afdb-51d7-442c-a0eb-baf47399beb7-kube-api-access-j5zm6\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-n6vtm\" (UID: \"c754afdb-51d7-442c-a0eb-baf47399beb7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm" Jan 22 15:50:52 crc kubenswrapper[4825]: I0122 15:50:52.627754 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm" Jan 22 15:50:53 crc kubenswrapper[4825]: I0122 15:50:53.249126 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm"] Jan 22 15:50:54 crc kubenswrapper[4825]: I0122 15:50:54.228462 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm" event={"ID":"c754afdb-51d7-442c-a0eb-baf47399beb7","Type":"ContainerStarted","Data":"9adef738879690827533b2ce5b8d5a24a77080c2131b4fd1774a4465f5316eeb"} Jan 22 15:50:54 crc kubenswrapper[4825]: I0122 15:50:54.229074 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm" event={"ID":"c754afdb-51d7-442c-a0eb-baf47399beb7","Type":"ContainerStarted","Data":"39e3bbcb9d212f3da3adf888ae22724c19c8c41252243df983d988a4ae2b5700"} Jan 22 15:50:54 crc kubenswrapper[4825]: I0122 15:50:54.257296 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm" podStartSLOduration=1.748431909 podStartE2EDuration="2.257270157s" podCreationTimestamp="2026-01-22 15:50:52 +0000 UTC" firstStartedPulling="2026-01-22 15:50:53.255539829 +0000 UTC m=+1600.017066739" lastFinishedPulling="2026-01-22 15:50:53.764378077 +0000 UTC m=+1600.525904987" observedRunningTime="2026-01-22 15:50:54.244922539 +0000 UTC m=+1601.006449449" watchObservedRunningTime="2026-01-22 15:50:54.257270157 +0000 UTC m=+1601.018797067" Jan 22 15:50:58 crc kubenswrapper[4825]: I0122 15:50:58.365119 4825 generic.go:334] "Generic (PLEG): container finished" podID="c754afdb-51d7-442c-a0eb-baf47399beb7" containerID="9adef738879690827533b2ce5b8d5a24a77080c2131b4fd1774a4465f5316eeb" exitCode=0 Jan 22 15:50:58 crc kubenswrapper[4825]: I0122 15:50:58.365220 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm" event={"ID":"c754afdb-51d7-442c-a0eb-baf47399beb7","Type":"ContainerDied","Data":"9adef738879690827533b2ce5b8d5a24a77080c2131b4fd1774a4465f5316eeb"} Jan 22 15:50:59 crc kubenswrapper[4825]: I0122 15:50:59.979990 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.100758 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c754afdb-51d7-442c-a0eb-baf47399beb7-inventory\") pod \"c754afdb-51d7-442c-a0eb-baf47399beb7\" (UID: \"c754afdb-51d7-442c-a0eb-baf47399beb7\") " Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.100950 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c754afdb-51d7-442c-a0eb-baf47399beb7-ssh-key-openstack-edpm-ipam\") pod \"c754afdb-51d7-442c-a0eb-baf47399beb7\" (UID: \"c754afdb-51d7-442c-a0eb-baf47399beb7\") " Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.101070 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5zm6\" (UniqueName: \"kubernetes.io/projected/c754afdb-51d7-442c-a0eb-baf47399beb7-kube-api-access-j5zm6\") pod \"c754afdb-51d7-442c-a0eb-baf47399beb7\" (UID: \"c754afdb-51d7-442c-a0eb-baf47399beb7\") " Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.107269 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c754afdb-51d7-442c-a0eb-baf47399beb7-kube-api-access-j5zm6" (OuterVolumeSpecName: "kube-api-access-j5zm6") pod "c754afdb-51d7-442c-a0eb-baf47399beb7" (UID: "c754afdb-51d7-442c-a0eb-baf47399beb7"). InnerVolumeSpecName "kube-api-access-j5zm6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.138514 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c754afdb-51d7-442c-a0eb-baf47399beb7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c754afdb-51d7-442c-a0eb-baf47399beb7" (UID: "c754afdb-51d7-442c-a0eb-baf47399beb7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.141880 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c754afdb-51d7-442c-a0eb-baf47399beb7-inventory" (OuterVolumeSpecName: "inventory") pod "c754afdb-51d7-442c-a0eb-baf47399beb7" (UID: "c754afdb-51d7-442c-a0eb-baf47399beb7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.203723 4825 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c754afdb-51d7-442c-a0eb-baf47399beb7-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.203757 4825 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c754afdb-51d7-442c-a0eb-baf47399beb7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.203767 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5zm6\" (UniqueName: \"kubernetes.io/projected/c754afdb-51d7-442c-a0eb-baf47399beb7-kube-api-access-j5zm6\") on node \"crc\" DevicePath \"\"" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.421130 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm" 
event={"ID":"c754afdb-51d7-442c-a0eb-baf47399beb7","Type":"ContainerDied","Data":"39e3bbcb9d212f3da3adf888ae22724c19c8c41252243df983d988a4ae2b5700"} Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.421409 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39e3bbcb9d212f3da3adf888ae22724c19c8c41252243df983d988a4ae2b5700" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.421489 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6vtm" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.494303 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6"] Jan 22 15:51:00 crc kubenswrapper[4825]: E0122 15:51:00.494863 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c754afdb-51d7-442c-a0eb-baf47399beb7" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.494889 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="c754afdb-51d7-442c-a0eb-baf47399beb7" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.495166 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="c754afdb-51d7-442c-a0eb-baf47399beb7" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.496062 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.498110 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql4gv" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.498760 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.498930 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.499833 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.506753 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6"] Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.614383 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793f46d5-06cd-4273-9905-f235c6bc4f72-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6\" (UID: \"793f46d5-06cd-4273-9905-f235c6bc4f72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.614504 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/793f46d5-06cd-4273-9905-f235c6bc4f72-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6\" (UID: \"793f46d5-06cd-4273-9905-f235c6bc4f72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" Jan 22 15:51:00 crc kubenswrapper[4825]: 
I0122 15:51:00.614708 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/793f46d5-06cd-4273-9905-f235c6bc4f72-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6\" (UID: \"793f46d5-06cd-4273-9905-f235c6bc4f72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.615191 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thfj7\" (UniqueName: \"kubernetes.io/projected/793f46d5-06cd-4273-9905-f235c6bc4f72-kube-api-access-thfj7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6\" (UID: \"793f46d5-06cd-4273-9905-f235c6bc4f72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.716156 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thfj7\" (UniqueName: \"kubernetes.io/projected/793f46d5-06cd-4273-9905-f235c6bc4f72-kube-api-access-thfj7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6\" (UID: \"793f46d5-06cd-4273-9905-f235c6bc4f72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.716263 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793f46d5-06cd-4273-9905-f235c6bc4f72-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6\" (UID: \"793f46d5-06cd-4273-9905-f235c6bc4f72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.716329 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/793f46d5-06cd-4273-9905-f235c6bc4f72-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6\" (UID: \"793f46d5-06cd-4273-9905-f235c6bc4f72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.716414 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/793f46d5-06cd-4273-9905-f235c6bc4f72-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6\" (UID: \"793f46d5-06cd-4273-9905-f235c6bc4f72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.721458 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/793f46d5-06cd-4273-9905-f235c6bc4f72-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6\" (UID: \"793f46d5-06cd-4273-9905-f235c6bc4f72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.722018 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/793f46d5-06cd-4273-9905-f235c6bc4f72-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6\" (UID: \"793f46d5-06cd-4273-9905-f235c6bc4f72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.722258 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793f46d5-06cd-4273-9905-f235c6bc4f72-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6\" (UID: \"793f46d5-06cd-4273-9905-f235c6bc4f72\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.738777 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thfj7\" (UniqueName: \"kubernetes.io/projected/793f46d5-06cd-4273-9905-f235c6bc4f72-kube-api-access-thfj7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6\" (UID: \"793f46d5-06cd-4273-9905-f235c6bc4f72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" Jan 22 15:51:00 crc kubenswrapper[4825]: I0122 15:51:00.817720 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" Jan 22 15:51:01 crc kubenswrapper[4825]: I0122 15:51:01.450550 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6"] Jan 22 15:51:02 crc kubenswrapper[4825]: I0122 15:51:02.443302 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" event={"ID":"793f46d5-06cd-4273-9905-f235c6bc4f72","Type":"ContainerStarted","Data":"38aa415f270802e765575378466c02958780d82c88051cd9ef46d844cd43eaed"} Jan 22 15:51:02 crc kubenswrapper[4825]: I0122 15:51:02.443785 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" event={"ID":"793f46d5-06cd-4273-9905-f235c6bc4f72","Type":"ContainerStarted","Data":"b37f38def69a6d4a5774cd9337b53b5a55e68cf8a05c0c03f6dd64a6de11a3d6"} Jan 22 15:51:05 crc kubenswrapper[4825]: I0122 15:51:05.542342 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:51:05 crc kubenswrapper[4825]: I0122 15:51:05.542952 
4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:51:35 crc kubenswrapper[4825]: I0122 15:51:35.541450 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:51:35 crc kubenswrapper[4825]: I0122 15:51:35.542285 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:51:35 crc kubenswrapper[4825]: I0122 15:51:35.542392 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 15:51:35 crc kubenswrapper[4825]: I0122 15:51:35.543906 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d"} pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 15:51:35 crc kubenswrapper[4825]: I0122 15:51:35.544114 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" 
containerName="machine-config-daemon" containerID="cri-o://88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" gracePeriod=600 Jan 22 15:51:35 crc kubenswrapper[4825]: E0122 15:51:35.668767 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:51:35 crc kubenswrapper[4825]: I0122 15:51:35.881729 4825 generic.go:334] "Generic (PLEG): container finished" podID="1d6015ae-d193-4854-9861-dc4384510fdb" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" exitCode=0 Jan 22 15:51:35 crc kubenswrapper[4825]: I0122 15:51:35.881786 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerDied","Data":"88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d"} Jan 22 15:51:35 crc kubenswrapper[4825]: I0122 15:51:35.881897 4825 scope.go:117] "RemoveContainer" containerID="4f117d8aef866860d54f3d492ab55e9d654f82ddf841344db75dba9d26403f13" Jan 22 15:51:35 crc kubenswrapper[4825]: I0122 15:51:35.882533 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:51:35 crc kubenswrapper[4825]: E0122 15:51:35.883051 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:51:35 crc kubenswrapper[4825]: I0122 15:51:35.901892 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" podStartSLOduration=35.365170396 podStartE2EDuration="35.901871158s" podCreationTimestamp="2026-01-22 15:51:00 +0000 UTC" firstStartedPulling="2026-01-22 15:51:01.475784613 +0000 UTC m=+1608.237311533" lastFinishedPulling="2026-01-22 15:51:02.012485385 +0000 UTC m=+1608.774012295" observedRunningTime="2026-01-22 15:51:02.52440607 +0000 UTC m=+1609.285932990" watchObservedRunningTime="2026-01-22 15:51:35.901871158 +0000 UTC m=+1642.663398068" Jan 22 15:51:41 crc kubenswrapper[4825]: I0122 15:51:41.229546 4825 scope.go:117] "RemoveContainer" containerID="4e1e7f96f960964209e9e544f3bc9f3d759c931fae34cd9edbd63abc8db7dc81" Jan 22 15:51:41 crc kubenswrapper[4825]: I0122 15:51:41.264558 4825 scope.go:117] "RemoveContainer" containerID="53b3f499ed24bf51364d752406df5edab9a457a04f767fb65a743cff000ca988" Jan 22 15:51:41 crc kubenswrapper[4825]: I0122 15:51:41.310648 4825 scope.go:117] "RemoveContainer" containerID="533415d684514fcc5cb6e9b617c61baf7a74ce45888b4e89ca677e52b85e84cc" Jan 22 15:51:41 crc kubenswrapper[4825]: I0122 15:51:41.342236 4825 scope.go:117] "RemoveContainer" containerID="aea3f64dc09134f3d6a7d83c91edb8129195bd4464169db681f14320c5e9d088" Jan 22 15:51:41 crc kubenswrapper[4825]: I0122 15:51:41.398579 4825 scope.go:117] "RemoveContainer" containerID="a6111f4cbe06aa79016ce81ccf5f78226b8a584b6987659384641d40833ca7a4" Jan 22 15:51:41 crc kubenswrapper[4825]: I0122 15:51:41.432070 4825 scope.go:117] "RemoveContainer" containerID="3bf6addf318eb95513170034d30e4e63aae1a3cd41e3c9bccd7e69f832b5abbb" Jan 22 15:51:48 crc kubenswrapper[4825]: I0122 15:51:48.517970 4825 scope.go:117] "RemoveContainer" 
containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:51:48 crc kubenswrapper[4825]: E0122 15:51:48.519086 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:52:03 crc kubenswrapper[4825]: I0122 15:52:03.528610 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:52:03 crc kubenswrapper[4825]: E0122 15:52:03.529391 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:52:14 crc kubenswrapper[4825]: I0122 15:52:14.517751 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:52:14 crc kubenswrapper[4825]: E0122 15:52:14.518527 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:52:28 crc kubenswrapper[4825]: I0122 15:52:28.517438 4825 scope.go:117] 
"RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:52:28 crc kubenswrapper[4825]: E0122 15:52:28.518452 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:52:41 crc kubenswrapper[4825]: I0122 15:52:41.563482 4825 scope.go:117] "RemoveContainer" containerID="881ad51114925b82d3e2b575b5eac6570265bbe6ca88e75e9258c0acacbd9610" Jan 22 15:52:41 crc kubenswrapper[4825]: I0122 15:52:41.598588 4825 scope.go:117] "RemoveContainer" containerID="e6f34d6720d6385e444c1a136e378f4feaca6bafe499e1c0adc29b5efb9a2abc" Jan 22 15:52:41 crc kubenswrapper[4825]: I0122 15:52:41.799179 4825 scope.go:117] "RemoveContainer" containerID="2539e5f70d32e284c60e666842e9a5fc27657f9405215a009e723c1c7bcef665" Jan 22 15:52:41 crc kubenswrapper[4825]: I0122 15:52:41.885443 4825 scope.go:117] "RemoveContainer" containerID="4af9c6ff6f0bdd2490612b20f791224e5807b5ccd3db93754ff81ab8fc3a499a" Jan 22 15:52:42 crc kubenswrapper[4825]: I0122 15:52:42.517809 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:52:42 crc kubenswrapper[4825]: E0122 15:52:42.518115 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:52:56 crc 
kubenswrapper[4825]: I0122 15:52:56.517746 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:52:56 crc kubenswrapper[4825]: E0122 15:52:56.518632 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:53:09 crc kubenswrapper[4825]: I0122 15:53:09.518580 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:53:09 crc kubenswrapper[4825]: E0122 15:53:09.519426 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:53:17 crc kubenswrapper[4825]: I0122 15:53:17.074431 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-97nzk"] Jan 22 15:53:17 crc kubenswrapper[4825]: I0122 15:53:17.091105 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8e3f-account-create-update-4b6j6"] Jan 22 15:53:17 crc kubenswrapper[4825]: I0122 15:53:17.106350 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-97nzk"] Jan 22 15:53:17 crc kubenswrapper[4825]: I0122 15:53:17.113398 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-8e3f-account-create-update-4b6j6"] Jan 22 15:53:17 crc kubenswrapper[4825]: I0122 15:53:17.532233 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37522712-d1ed-4a4d-ae99-c8fa95502dc1" path="/var/lib/kubelet/pods/37522712-d1ed-4a4d-ae99-c8fa95502dc1/volumes" Jan 22 15:53:17 crc kubenswrapper[4825]: I0122 15:53:17.533517 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b017023-1c9c-4ff7-9f21-8370aa38cc26" path="/var/lib/kubelet/pods/3b017023-1c9c-4ff7-9f21-8370aa38cc26/volumes" Jan 22 15:53:20 crc kubenswrapper[4825]: I0122 15:53:20.517687 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:53:20 crc kubenswrapper[4825]: E0122 15:53:20.518209 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:53:24 crc kubenswrapper[4825]: I0122 15:53:24.050828 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-8142-account-create-update-znkfn"] Jan 22 15:53:24 crc kubenswrapper[4825]: I0122 15:53:24.064490 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-cl5vd"] Jan 22 15:53:24 crc kubenswrapper[4825]: I0122 15:53:24.074942 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1cd2-account-create-update-8wkl6"] Jan 22 15:53:24 crc kubenswrapper[4825]: I0122 15:53:24.103248 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-zq89m"] Jan 22 15:53:24 crc kubenswrapper[4825]: I0122 15:53:24.112955 4825 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/glance-1cd2-account-create-update-8wkl6"] Jan 22 15:53:24 crc kubenswrapper[4825]: I0122 15:53:24.123195 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-8142-account-create-update-znkfn"] Jan 22 15:53:24 crc kubenswrapper[4825]: I0122 15:53:24.133355 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-cl5vd"] Jan 22 15:53:24 crc kubenswrapper[4825]: I0122 15:53:24.142867 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-zq89m"] Jan 22 15:53:25 crc kubenswrapper[4825]: I0122 15:53:25.533098 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08d465c3-ddd8-4a39-8b52-6df888237aa0" path="/var/lib/kubelet/pods/08d465c3-ddd8-4a39-8b52-6df888237aa0/volumes" Jan 22 15:53:25 crc kubenswrapper[4825]: I0122 15:53:25.534843 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34e423be-52ff-4474-af1b-472639d2b618" path="/var/lib/kubelet/pods/34e423be-52ff-4474-af1b-472639d2b618/volumes" Jan 22 15:53:25 crc kubenswrapper[4825]: I0122 15:53:25.536099 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ecaf74d-a8a9-4fd4-91dc-841debd0df4c" path="/var/lib/kubelet/pods/5ecaf74d-a8a9-4fd4-91dc-841debd0df4c/volumes" Jan 22 15:53:25 crc kubenswrapper[4825]: I0122 15:53:25.537047 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2ceb787-de8b-4252-981e-818c4ca7c79c" path="/var/lib/kubelet/pods/d2ceb787-de8b-4252-981e-818c4ca7c79c/volumes" Jan 22 15:53:31 crc kubenswrapper[4825]: I0122 15:53:31.517973 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:53:31 crc kubenswrapper[4825]: E0122 15:53:31.518879 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:53:36 crc kubenswrapper[4825]: I0122 15:53:36.050437 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-7xb69"] Jan 22 15:53:36 crc kubenswrapper[4825]: I0122 15:53:36.064288 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-cqc55"] Jan 22 15:53:36 crc kubenswrapper[4825]: I0122 15:53:36.073835 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5c47-account-create-update-qrfz5"] Jan 22 15:53:36 crc kubenswrapper[4825]: I0122 15:53:36.083372 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-7xb69"] Jan 22 15:53:36 crc kubenswrapper[4825]: I0122 15:53:36.093211 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-cqc55"] Jan 22 15:53:36 crc kubenswrapper[4825]: I0122 15:53:36.102942 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5c47-account-create-update-qrfz5"] Jan 22 15:53:37 crc kubenswrapper[4825]: I0122 15:53:37.542559 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50a6798c-df21-4a31-a652-836868719f0e" path="/var/lib/kubelet/pods/50a6798c-df21-4a31-a652-836868719f0e/volumes" Jan 22 15:53:37 crc kubenswrapper[4825]: I0122 15:53:37.544097 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b7bf140-ffaa-4907-a659-0e00718698e0" path="/var/lib/kubelet/pods/8b7bf140-ffaa-4907-a659-0e00718698e0/volumes" Jan 22 15:53:37 crc kubenswrapper[4825]: I0122 15:53:37.545182 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1de7e5d-05c7-4b56-9896-41aece1133fe" path="/var/lib/kubelet/pods/a1de7e5d-05c7-4b56-9896-41aece1133fe/volumes" Jan 22 
15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.050431 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5835-account-create-update-ps6lh"] Jan 22 15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.062998 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-qjswp"] Jan 22 15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.073955 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-wqxdd"] Jan 22 15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.086026 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-36b5-account-create-update-bm8dq"] Jan 22 15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.100310 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5835-account-create-update-ps6lh"] Jan 22 15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.113236 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kdjdn"] Jan 22 15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.123453 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-qjswp"] Jan 22 15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.132880 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-wqxdd"] Jan 22 15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.142312 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-36b5-account-create-update-bm8dq"] Jan 22 15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.151294 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-5da4-account-create-update-67kjm"] Jan 22 15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.161880 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-kdjdn"] Jan 22 15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.172488 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cloudkitty-5da4-account-create-update-67kjm"] Jan 22 15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.535464 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d719b91-8720-4e1c-9be9-a94cf9f3b15c" path="/var/lib/kubelet/pods/5d719b91-8720-4e1c-9be9-a94cf9f3b15c/volumes" Jan 22 15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.536252 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66c69ccb-9024-4ce2-bb24-04640babc65c" path="/var/lib/kubelet/pods/66c69ccb-9024-4ce2-bb24-04640babc65c/volumes" Jan 22 15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.536879 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="859cf314-a4cb-4952-8445-91d01ee03ca9" path="/var/lib/kubelet/pods/859cf314-a4cb-4952-8445-91d01ee03ca9/volumes" Jan 22 15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.537548 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="933073b1-0e88-4f40-9c8d-12050b5ccc0a" path="/var/lib/kubelet/pods/933073b1-0e88-4f40-9c8d-12050b5ccc0a/volumes" Jan 22 15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.539102 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eea3602a-5333-49e3-ba9f-041bdd79218f" path="/var/lib/kubelet/pods/eea3602a-5333-49e3-ba9f-041bdd79218f/volumes" Jan 22 15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.539811 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9b5aaf3-84d2-47b2-8d21-684f354813f8" path="/var/lib/kubelet/pods/f9b5aaf3-84d2-47b2-8d21-684f354813f8/volumes" Jan 22 15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.961419 4825 scope.go:117] "RemoveContainer" containerID="9c942a2d0f8f8c7ebdeac5233b4f19f183a6ffd6d67d42b9672aa9b0b7f9341f" Jan 22 15:53:41 crc kubenswrapper[4825]: I0122 15:53:41.994140 4825 scope.go:117] "RemoveContainer" containerID="4f52553f8b9f2a304e23d98027383e178b8a7739cd54e0b35aa80646129ac535" Jan 22 15:53:42 crc kubenswrapper[4825]: I0122 15:53:42.059749 4825 
scope.go:117] "RemoveContainer" containerID="2444970279a7d3f2048b11a29bbf6a1a801c6bcf6a16405f6fac3b7d2e96ce41" Jan 22 15:53:42 crc kubenswrapper[4825]: I0122 15:53:42.113076 4825 scope.go:117] "RemoveContainer" containerID="5394e21bb385486cb59d0d6353285a35001bf791a5d3bc46ed811fb2f4a65735" Jan 22 15:53:42 crc kubenswrapper[4825]: I0122 15:53:42.168866 4825 scope.go:117] "RemoveContainer" containerID="7b6ca5ed145bea05108f82644894e5ea2d1ddde556c8bf3c9b95f4fa7bb5305b" Jan 22 15:53:42 crc kubenswrapper[4825]: I0122 15:53:42.218761 4825 scope.go:117] "RemoveContainer" containerID="dc93ad5f17a6a9e656efe285f923d56a228e07122a965de1d95fcd0c5d62206c" Jan 22 15:53:42 crc kubenswrapper[4825]: I0122 15:53:42.269395 4825 scope.go:117] "RemoveContainer" containerID="e48b4f16f89f942f656c4a1757c78d7c8be3f12f2171c66bdf76485e428d5837" Jan 22 15:53:42 crc kubenswrapper[4825]: I0122 15:53:42.316832 4825 scope.go:117] "RemoveContainer" containerID="b8658e534f63c9619e756afd71f9871aad8589a5a2c630e250da84bb07b23c31" Jan 22 15:53:42 crc kubenswrapper[4825]: I0122 15:53:42.345964 4825 scope.go:117] "RemoveContainer" containerID="3e2a7a9b753421aa14d7c0941c26bc0857666c5b29001d65ec2b8f8f6b4b7356" Jan 22 15:53:42 crc kubenswrapper[4825]: I0122 15:53:42.368633 4825 scope.go:117] "RemoveContainer" containerID="1934faf63fe73508d347b397d946ee591db6489ba2cf4030a4a659275bec7d37" Jan 22 15:53:42 crc kubenswrapper[4825]: I0122 15:53:42.389417 4825 scope.go:117] "RemoveContainer" containerID="c7e6649f64e637fce0f75a6b9e896c0997d641a5d1ed7c892bda4b969dc8fb10" Jan 22 15:53:42 crc kubenswrapper[4825]: I0122 15:53:42.412210 4825 scope.go:117] "RemoveContainer" containerID="4c1f0153b65bc5175c312c4cd719e4fe55c8b1161aecc54033ef59e240dc7ac3" Jan 22 15:53:42 crc kubenswrapper[4825]: I0122 15:53:42.436315 4825 scope.go:117] "RemoveContainer" containerID="a6f65147ae431a301e01665f4fa6719ac9c36404487cc4fbfc7dce8a36239a79" Jan 22 15:53:42 crc kubenswrapper[4825]: I0122 15:53:42.457667 4825 scope.go:117] 
"RemoveContainer" containerID="190abddd8a55a3fe35d6e0419f6953c9e82ba75e24580202d90e0a38e48e7c21" Jan 22 15:53:42 crc kubenswrapper[4825]: I0122 15:53:42.483575 4825 scope.go:117] "RemoveContainer" containerID="29ece37c41db9dbe8d417e698cd0ba7d05e1cf94cfc76b185ee87012c12104e5" Jan 22 15:53:46 crc kubenswrapper[4825]: I0122 15:53:46.517182 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:53:46 crc kubenswrapper[4825]: E0122 15:53:46.517950 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:53:57 crc kubenswrapper[4825]: I0122 15:53:57.043742 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-2gv7p"] Jan 22 15:53:57 crc kubenswrapper[4825]: I0122 15:53:57.054306 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-2gv7p"] Jan 22 15:53:57 crc kubenswrapper[4825]: I0122 15:53:57.530380 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="901e6ba7-980d-4fee-acbd-5aa8314aed8e" path="/var/lib/kubelet/pods/901e6ba7-980d-4fee-acbd-5aa8314aed8e/volumes" Jan 22 15:54:00 crc kubenswrapper[4825]: I0122 15:54:00.517714 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:54:00 crc kubenswrapper[4825]: E0122 15:54:00.518221 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:54:12 crc kubenswrapper[4825]: I0122 15:54:12.518177 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:54:12 crc kubenswrapper[4825]: E0122 15:54:12.519599 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:54:25 crc kubenswrapper[4825]: I0122 15:54:25.588808 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:54:25 crc kubenswrapper[4825]: E0122 15:54:25.589463 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:54:28 crc kubenswrapper[4825]: I0122 15:54:28.076292 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-r6p27"] Jan 22 15:54:28 crc kubenswrapper[4825]: I0122 15:54:28.223817 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-r6p27"] Jan 22 15:54:28 crc kubenswrapper[4825]: I0122 15:54:28.235066 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-db-sync-72pdg"] Jan 22 15:54:28 crc kubenswrapper[4825]: I0122 15:54:28.247667 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-72pdg"] Jan 22 15:54:29 crc kubenswrapper[4825]: I0122 15:54:29.533038 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbc53412-3b60-473b-a918-df5872264c8e" path="/var/lib/kubelet/pods/fbc53412-3b60-473b-a918-df5872264c8e/volumes" Jan 22 15:54:29 crc kubenswrapper[4825]: I0122 15:54:29.534463 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffcfdefe-f831-469c-9423-6cd4399435a7" path="/var/lib/kubelet/pods/ffcfdefe-f831-469c-9423-6cd4399435a7/volumes" Jan 22 15:54:30 crc kubenswrapper[4825]: I0122 15:54:30.023453 4825 generic.go:334] "Generic (PLEG): container finished" podID="793f46d5-06cd-4273-9905-f235c6bc4f72" containerID="38aa415f270802e765575378466c02958780d82c88051cd9ef46d844cd43eaed" exitCode=0 Jan 22 15:54:30 crc kubenswrapper[4825]: I0122 15:54:30.023507 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" event={"ID":"793f46d5-06cd-4273-9905-f235c6bc4f72","Type":"ContainerDied","Data":"38aa415f270802e765575378466c02958780d82c88051cd9ef46d844cd43eaed"} Jan 22 15:54:31 crc kubenswrapper[4825]: I0122 15:54:31.509239 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" Jan 22 15:54:31 crc kubenswrapper[4825]: I0122 15:54:31.698193 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/793f46d5-06cd-4273-9905-f235c6bc4f72-ssh-key-openstack-edpm-ipam\") pod \"793f46d5-06cd-4273-9905-f235c6bc4f72\" (UID: \"793f46d5-06cd-4273-9905-f235c6bc4f72\") " Jan 22 15:54:31 crc kubenswrapper[4825]: I0122 15:54:31.698367 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thfj7\" (UniqueName: \"kubernetes.io/projected/793f46d5-06cd-4273-9905-f235c6bc4f72-kube-api-access-thfj7\") pod \"793f46d5-06cd-4273-9905-f235c6bc4f72\" (UID: \"793f46d5-06cd-4273-9905-f235c6bc4f72\") " Jan 22 15:54:31 crc kubenswrapper[4825]: I0122 15:54:31.698429 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/793f46d5-06cd-4273-9905-f235c6bc4f72-inventory\") pod \"793f46d5-06cd-4273-9905-f235c6bc4f72\" (UID: \"793f46d5-06cd-4273-9905-f235c6bc4f72\") " Jan 22 15:54:31 crc kubenswrapper[4825]: I0122 15:54:31.698720 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793f46d5-06cd-4273-9905-f235c6bc4f72-bootstrap-combined-ca-bundle\") pod \"793f46d5-06cd-4273-9905-f235c6bc4f72\" (UID: \"793f46d5-06cd-4273-9905-f235c6bc4f72\") " Jan 22 15:54:31 crc kubenswrapper[4825]: I0122 15:54:31.705937 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/793f46d5-06cd-4273-9905-f235c6bc4f72-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "793f46d5-06cd-4273-9905-f235c6bc4f72" (UID: "793f46d5-06cd-4273-9905-f235c6bc4f72"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:54:31 crc kubenswrapper[4825]: I0122 15:54:31.713275 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/793f46d5-06cd-4273-9905-f235c6bc4f72-kube-api-access-thfj7" (OuterVolumeSpecName: "kube-api-access-thfj7") pod "793f46d5-06cd-4273-9905-f235c6bc4f72" (UID: "793f46d5-06cd-4273-9905-f235c6bc4f72"). InnerVolumeSpecName "kube-api-access-thfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:54:31 crc kubenswrapper[4825]: I0122 15:54:31.745370 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/793f46d5-06cd-4273-9905-f235c6bc4f72-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "793f46d5-06cd-4273-9905-f235c6bc4f72" (UID: "793f46d5-06cd-4273-9905-f235c6bc4f72"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:54:31 crc kubenswrapper[4825]: I0122 15:54:31.747779 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/793f46d5-06cd-4273-9905-f235c6bc4f72-inventory" (OuterVolumeSpecName: "inventory") pod "793f46d5-06cd-4273-9905-f235c6bc4f72" (UID: "793f46d5-06cd-4273-9905-f235c6bc4f72"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:54:31 crc kubenswrapper[4825]: I0122 15:54:31.802699 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thfj7\" (UniqueName: \"kubernetes.io/projected/793f46d5-06cd-4273-9905-f235c6bc4f72-kube-api-access-thfj7\") on node \"crc\" DevicePath \"\"" Jan 22 15:54:31 crc kubenswrapper[4825]: I0122 15:54:31.802757 4825 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/793f46d5-06cd-4273-9905-f235c6bc4f72-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 15:54:31 crc kubenswrapper[4825]: I0122 15:54:31.802776 4825 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793f46d5-06cd-4273-9905-f235c6bc4f72-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 15:54:31 crc kubenswrapper[4825]: I0122 15:54:31.802794 4825 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/793f46d5-06cd-4273-9905-f235c6bc4f72-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.043186 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" event={"ID":"793f46d5-06cd-4273-9905-f235c6bc4f72","Type":"ContainerDied","Data":"b37f38def69a6d4a5774cd9337b53b5a55e68cf8a05c0c03f6dd64a6de11a3d6"} Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.043228 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b37f38def69a6d4a5774cd9337b53b5a55e68cf8a05c0c03f6dd64a6de11a3d6" Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.043270 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6" Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.141692 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc"] Jan 22 15:54:32 crc kubenswrapper[4825]: E0122 15:54:32.143192 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="793f46d5-06cd-4273-9905-f235c6bc4f72" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.143219 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="793f46d5-06cd-4273-9905-f235c6bc4f72" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.143532 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="793f46d5-06cd-4273-9905-f235c6bc4f72" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.144437 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc" Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.151904 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.152146 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.152427 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.153254 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql4gv" Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.158369 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc"] Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.327202 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8pjl\" (UniqueName: \"kubernetes.io/projected/95d9e491-a6ee-43ee-bdee-a94b23e1e510-kube-api-access-n8pjl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc\" (UID: \"95d9e491-a6ee-43ee-bdee-a94b23e1e510\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc" Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.327290 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95d9e491-a6ee-43ee-bdee-a94b23e1e510-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc\" (UID: \"95d9e491-a6ee-43ee-bdee-a94b23e1e510\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc" Jan 22 15:54:32 crc 
kubenswrapper[4825]: I0122 15:54:32.327349 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95d9e491-a6ee-43ee-bdee-a94b23e1e510-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc\" (UID: \"95d9e491-a6ee-43ee-bdee-a94b23e1e510\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc" Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.429635 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8pjl\" (UniqueName: \"kubernetes.io/projected/95d9e491-a6ee-43ee-bdee-a94b23e1e510-kube-api-access-n8pjl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc\" (UID: \"95d9e491-a6ee-43ee-bdee-a94b23e1e510\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc" Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.429770 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95d9e491-a6ee-43ee-bdee-a94b23e1e510-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc\" (UID: \"95d9e491-a6ee-43ee-bdee-a94b23e1e510\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc" Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.429817 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95d9e491-a6ee-43ee-bdee-a94b23e1e510-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc\" (UID: \"95d9e491-a6ee-43ee-bdee-a94b23e1e510\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc" Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.434507 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/95d9e491-a6ee-43ee-bdee-a94b23e1e510-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc\" (UID: \"95d9e491-a6ee-43ee-bdee-a94b23e1e510\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc" Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.435207 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95d9e491-a6ee-43ee-bdee-a94b23e1e510-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc\" (UID: \"95d9e491-a6ee-43ee-bdee-a94b23e1e510\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc" Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.451971 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8pjl\" (UniqueName: \"kubernetes.io/projected/95d9e491-a6ee-43ee-bdee-a94b23e1e510-kube-api-access-n8pjl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc\" (UID: \"95d9e491-a6ee-43ee-bdee-a94b23e1e510\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc" Jan 22 15:54:32 crc kubenswrapper[4825]: I0122 15:54:32.465999 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc" Jan 22 15:54:33 crc kubenswrapper[4825]: I0122 15:54:33.164949 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc"] Jan 22 15:54:33 crc kubenswrapper[4825]: W0122 15:54:33.171715 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95d9e491_a6ee_43ee_bdee_a94b23e1e510.slice/crio-907dbe4f55e716c2ee094c7de1312feb2aa7d633a62dece2690aecd77a795acc WatchSource:0}: Error finding container 907dbe4f55e716c2ee094c7de1312feb2aa7d633a62dece2690aecd77a795acc: Status 404 returned error can't find the container with id 907dbe4f55e716c2ee094c7de1312feb2aa7d633a62dece2690aecd77a795acc Jan 22 15:54:33 crc kubenswrapper[4825]: I0122 15:54:33.176084 4825 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 15:54:34 crc kubenswrapper[4825]: I0122 15:54:34.066418 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc" event={"ID":"95d9e491-a6ee-43ee-bdee-a94b23e1e510","Type":"ContainerStarted","Data":"275c053c694114c02052f72b31728870d2a048a11f20321203cdb0439e269a64"} Jan 22 15:54:34 crc kubenswrapper[4825]: I0122 15:54:34.066949 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc" event={"ID":"95d9e491-a6ee-43ee-bdee-a94b23e1e510","Type":"ContainerStarted","Data":"907dbe4f55e716c2ee094c7de1312feb2aa7d633a62dece2690aecd77a795acc"} Jan 22 15:54:34 crc kubenswrapper[4825]: I0122 15:54:34.091884 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc" podStartSLOduration=1.4706950220000001 podStartE2EDuration="2.09184911s" podCreationTimestamp="2026-01-22 
15:54:32 +0000 UTC" firstStartedPulling="2026-01-22 15:54:33.175769453 +0000 UTC m=+1819.937296363" lastFinishedPulling="2026-01-22 15:54:33.796923541 +0000 UTC m=+1820.558450451" observedRunningTime="2026-01-22 15:54:34.090211094 +0000 UTC m=+1820.851738004" watchObservedRunningTime="2026-01-22 15:54:34.09184911 +0000 UTC m=+1820.853376020" Jan 22 15:54:40 crc kubenswrapper[4825]: I0122 15:54:40.517347 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:54:40 crc kubenswrapper[4825]: E0122 15:54:40.519537 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:54:42 crc kubenswrapper[4825]: I0122 15:54:42.840320 4825 scope.go:117] "RemoveContainer" containerID="1b01bcb1986270592b766c84c6e3eabc89d23db2ef19539734c15a0ca0f3eecb" Jan 22 15:54:42 crc kubenswrapper[4825]: I0122 15:54:42.866638 4825 scope.go:117] "RemoveContainer" containerID="e18cf9e77c5bc94544ef3c1a26b9abea5ff78a0a040ad43c4d195ff82daa2664" Jan 22 15:54:42 crc kubenswrapper[4825]: I0122 15:54:42.923242 4825 scope.go:117] "RemoveContainer" containerID="d474c70c077c3c1a66580d8ea3df936ae81927c8b3bd1312fa5f979e46fb0e71" Jan 22 15:54:42 crc kubenswrapper[4825]: I0122 15:54:42.979746 4825 scope.go:117] "RemoveContainer" containerID="cb3d640515e4d069a3aee78de8fedc7cc0fece72c5a788bdabf59dfd2a32b789" Jan 22 15:54:43 crc kubenswrapper[4825]: I0122 15:54:43.008087 4825 scope.go:117] "RemoveContainer" containerID="a7f7a7a114493dfdb4ad180cfca393e175a61a2114ffb28e6e67efd950d9dbf7" Jan 22 15:54:50 crc kubenswrapper[4825]: I0122 15:54:50.144898 4825 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-t8c8z"] Jan 22 15:54:50 crc kubenswrapper[4825]: I0122 15:54:50.153595 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-t8c8z"] Jan 22 15:54:51 crc kubenswrapper[4825]: I0122 15:54:51.532906 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2480086-9709-4e61-af71-042055623d32" path="/var/lib/kubelet/pods/d2480086-9709-4e61-af71-042055623d32/volumes" Jan 22 15:54:52 crc kubenswrapper[4825]: I0122 15:54:52.030888 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-6fxh4"] Jan 22 15:54:52 crc kubenswrapper[4825]: I0122 15:54:52.041733 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-kwfvj"] Jan 22 15:54:52 crc kubenswrapper[4825]: I0122 15:54:52.055259 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-6fxh4"] Jan 22 15:54:52 crc kubenswrapper[4825]: I0122 15:54:52.065643 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-kwfvj"] Jan 22 15:54:53 crc kubenswrapper[4825]: I0122 15:54:53.529512 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:54:53 crc kubenswrapper[4825]: E0122 15:54:53.530632 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:54:53 crc kubenswrapper[4825]: I0122 15:54:53.532340 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="589c7924-baff-443f-b923-59a1348c709a" 
path="/var/lib/kubelet/pods/589c7924-baff-443f-b923-59a1348c709a/volumes" Jan 22 15:54:53 crc kubenswrapper[4825]: I0122 15:54:53.534504 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3ca6d27-7a63-4ba6-8baf-8180c79cd810" path="/var/lib/kubelet/pods/b3ca6d27-7a63-4ba6-8baf-8180c79cd810/volumes" Jan 22 15:54:58 crc kubenswrapper[4825]: I0122 15:54:58.043532 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-54vjv"] Jan 22 15:54:58 crc kubenswrapper[4825]: I0122 15:54:58.056153 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-54vjv"] Jan 22 15:54:59 crc kubenswrapper[4825]: I0122 15:54:59.529126 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7211decb-e02d-47e6-9ea7-493e8e6a3743" path="/var/lib/kubelet/pods/7211decb-e02d-47e6-9ea7-493e8e6a3743/volumes" Jan 22 15:55:05 crc kubenswrapper[4825]: I0122 15:55:05.517527 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:55:05 crc kubenswrapper[4825]: E0122 15:55:05.518418 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:55:20 crc kubenswrapper[4825]: I0122 15:55:20.517923 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:55:20 crc kubenswrapper[4825]: E0122 15:55:20.518915 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:55:35 crc kubenswrapper[4825]: I0122 15:55:35.518084 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:55:35 crc kubenswrapper[4825]: E0122 15:55:35.519008 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:55:43 crc kubenswrapper[4825]: I0122 15:55:43.131066 4825 scope.go:117] "RemoveContainer" containerID="0b7199d4207aec152ef2e116aa6cbce9b1ea29c3a44c091ea6c89c8c47195b96" Jan 22 15:55:43 crc kubenswrapper[4825]: I0122 15:55:43.186077 4825 scope.go:117] "RemoveContainer" containerID="2e6b7f886ffb9b325b3310026c9c248243d9843bf701112f7ebfb5473836de6e" Jan 22 15:55:43 crc kubenswrapper[4825]: I0122 15:55:43.415328 4825 scope.go:117] "RemoveContainer" containerID="ee2541af36c810a355aaa1df0357b556a6b23dba2e53bd88ec8fd232eb6ec607" Jan 22 15:55:43 crc kubenswrapper[4825]: I0122 15:55:43.440243 4825 scope.go:117] "RemoveContainer" containerID="6e796a891254fd632b2264705b6587b4e0bc2d248b5a72d2ec807c8d34d00b15" Jan 22 15:55:43 crc kubenswrapper[4825]: I0122 15:55:43.489596 4825 scope.go:117] "RemoveContainer" containerID="1ce120a81de5582340734a40599a7e8e65c947a0c732dfa76d07436a8232dab5" Jan 22 15:55:43 crc kubenswrapper[4825]: I0122 15:55:43.578176 4825 scope.go:117] "RemoveContainer" containerID="da8359e16e8b3afdd697e650baf9a6e11b55dbb0ad4162ee3c3de07649000b41" 
Jan 22 15:55:49 crc kubenswrapper[4825]: I0122 15:55:49.517624 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:55:49 crc kubenswrapper[4825]: E0122 15:55:49.518709 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:56:02 crc kubenswrapper[4825]: I0122 15:56:02.517492 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d" Jan 22 15:56:02 crc kubenswrapper[4825]: E0122 15:56:02.518330 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 15:56:13 crc kubenswrapper[4825]: I0122 15:56:13.054676 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-pxhlf"] Jan 22 15:56:13 crc kubenswrapper[4825]: I0122 15:56:13.067330 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-brf2x"] Jan 22 15:56:13 crc kubenswrapper[4825]: I0122 15:56:13.078986 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-9xbqd"] Jan 22 15:56:13 crc kubenswrapper[4825]: I0122 15:56:13.090261 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-pxhlf"] Jan 22 
15:56:13 crc kubenswrapper[4825]: I0122 15:56:13.101571 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-brf2x"]
Jan 22 15:56:13 crc kubenswrapper[4825]: I0122 15:56:13.111772 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-9xbqd"]
Jan 22 15:56:13 crc kubenswrapper[4825]: I0122 15:56:13.538161 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d"
Jan 22 15:56:13 crc kubenswrapper[4825]: E0122 15:56:13.538754 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 15:56:13 crc kubenswrapper[4825]: I0122 15:56:13.542548 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b51ad21-4de0-4a11-9859-c69b78c5c9fe" path="/var/lib/kubelet/pods/2b51ad21-4de0-4a11-9859-c69b78c5c9fe/volumes"
Jan 22 15:56:13 crc kubenswrapper[4825]: I0122 15:56:13.543443 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4278500e-8eaf-47d6-a746-d23a33cc2603" path="/var/lib/kubelet/pods/4278500e-8eaf-47d6-a746-d23a33cc2603/volumes"
Jan 22 15:56:13 crc kubenswrapper[4825]: I0122 15:56:13.544255 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8b64a93-b139-429e-87fc-28428abaf0f5" path="/var/lib/kubelet/pods/d8b64a93-b139-429e-87fc-28428abaf0f5/volumes"
Jan 22 15:56:14 crc kubenswrapper[4825]: I0122 15:56:14.039417 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-a2df-account-create-update-m6twc"]
Jan 22 15:56:14 crc kubenswrapper[4825]: I0122 15:56:14.052443 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-a651-account-create-update-mkmn4"]
Jan 22 15:56:14 crc kubenswrapper[4825]: I0122 15:56:14.061854 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-a2df-account-create-update-m6twc"]
Jan 22 15:56:14 crc kubenswrapper[4825]: I0122 15:56:14.070076 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-a651-account-create-update-mkmn4"]
Jan 22 15:56:15 crc kubenswrapper[4825]: I0122 15:56:15.035216 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b29e-account-create-update-56crs"]
Jan 22 15:56:15 crc kubenswrapper[4825]: I0122 15:56:15.048185 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-b29e-account-create-update-56crs"]
Jan 22 15:56:15 crc kubenswrapper[4825]: I0122 15:56:15.531478 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fc0009b-f413-4c14-829a-e3ffa344de3c" path="/var/lib/kubelet/pods/4fc0009b-f413-4c14-829a-e3ffa344de3c/volumes"
Jan 22 15:56:15 crc kubenswrapper[4825]: I0122 15:56:15.534050 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="686cc9da-301b-40e3-ae00-165f01c28654" path="/var/lib/kubelet/pods/686cc9da-301b-40e3-ae00-165f01c28654/volumes"
Jan 22 15:56:15 crc kubenswrapper[4825]: I0122 15:56:15.535341 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85ae6c15-5c71-4c86-ac9f-1df49436e099" path="/var/lib/kubelet/pods/85ae6c15-5c71-4c86-ac9f-1df49436e099/volumes"
Jan 22 15:56:25 crc kubenswrapper[4825]: I0122 15:56:25.528952 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d"
Jan 22 15:56:25 crc kubenswrapper[4825]: E0122 15:56:25.529972 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 15:56:39 crc kubenswrapper[4825]: I0122 15:56:39.517196 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d"
Jan 22 15:56:40 crc kubenswrapper[4825]: I0122 15:56:40.037852 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerStarted","Data":"5bcc1d277e3ad443248de981b2ff45bbf7029c5fe07cb018b3784c1adec9e60c"}
Jan 22 15:56:43 crc kubenswrapper[4825]: I0122 15:56:43.782445 4825 scope.go:117] "RemoveContainer" containerID="824b7b6567052d092268994afb7b5d5cf841f62c83005488341eb14624cebce8"
Jan 22 15:56:43 crc kubenswrapper[4825]: I0122 15:56:43.806649 4825 scope.go:117] "RemoveContainer" containerID="0dac02d39650005d6d6b79813baff509f0f3238c0c0d13242237a506fd455b95"
Jan 22 15:56:43 crc kubenswrapper[4825]: I0122 15:56:43.861539 4825 scope.go:117] "RemoveContainer" containerID="1047a606b1ee037b80d24b9ee14842b9a940b706b46850845f6d31e8cb357cea"
Jan 22 15:56:43 crc kubenswrapper[4825]: I0122 15:56:43.920728 4825 scope.go:117] "RemoveContainer" containerID="4de9cf3ef4f101bd4783b164914d09f56db733c890e5b78d4da7a509dab72f61"
Jan 22 15:56:43 crc kubenswrapper[4825]: I0122 15:56:43.968240 4825 scope.go:117] "RemoveContainer" containerID="f4af1435c38998a58198434b0dd1bd9a82c869f68719dc9fc4ae0fd2dd2e9cf9"
Jan 22 15:56:44 crc kubenswrapper[4825]: I0122 15:56:44.013285 4825 scope.go:117] "RemoveContainer" containerID="204018252ace59351294f08b4034ade05e14d1d8787d3fb1ef31951838542bef"
Jan 22 15:56:54 crc kubenswrapper[4825]: I0122 15:56:54.225370 4825 generic.go:334] "Generic (PLEG): container finished" podID="95d9e491-a6ee-43ee-bdee-a94b23e1e510" containerID="275c053c694114c02052f72b31728870d2a048a11f20321203cdb0439e269a64" exitCode=0
Jan 22 15:56:54 crc kubenswrapper[4825]: I0122 15:56:54.225442 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc" event={"ID":"95d9e491-a6ee-43ee-bdee-a94b23e1e510","Type":"ContainerDied","Data":"275c053c694114c02052f72b31728870d2a048a11f20321203cdb0439e269a64"}
Jan 22 15:56:55 crc kubenswrapper[4825]: I0122 15:56:55.794433 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc"
Jan 22 15:56:55 crc kubenswrapper[4825]: I0122 15:56:55.898602 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95d9e491-a6ee-43ee-bdee-a94b23e1e510-ssh-key-openstack-edpm-ipam\") pod \"95d9e491-a6ee-43ee-bdee-a94b23e1e510\" (UID: \"95d9e491-a6ee-43ee-bdee-a94b23e1e510\") "
Jan 22 15:56:55 crc kubenswrapper[4825]: I0122 15:56:55.898872 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95d9e491-a6ee-43ee-bdee-a94b23e1e510-inventory\") pod \"95d9e491-a6ee-43ee-bdee-a94b23e1e510\" (UID: \"95d9e491-a6ee-43ee-bdee-a94b23e1e510\") "
Jan 22 15:56:55 crc kubenswrapper[4825]: I0122 15:56:55.899107 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8pjl\" (UniqueName: \"kubernetes.io/projected/95d9e491-a6ee-43ee-bdee-a94b23e1e510-kube-api-access-n8pjl\") pod \"95d9e491-a6ee-43ee-bdee-a94b23e1e510\" (UID: \"95d9e491-a6ee-43ee-bdee-a94b23e1e510\") "
Jan 22 15:56:55 crc kubenswrapper[4825]: I0122 15:56:55.905214 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95d9e491-a6ee-43ee-bdee-a94b23e1e510-kube-api-access-n8pjl" (OuterVolumeSpecName: "kube-api-access-n8pjl") pod "95d9e491-a6ee-43ee-bdee-a94b23e1e510" (UID: "95d9e491-a6ee-43ee-bdee-a94b23e1e510"). InnerVolumeSpecName "kube-api-access-n8pjl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:56:55 crc kubenswrapper[4825]: I0122 15:56:55.929131 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95d9e491-a6ee-43ee-bdee-a94b23e1e510-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "95d9e491-a6ee-43ee-bdee-a94b23e1e510" (UID: "95d9e491-a6ee-43ee-bdee-a94b23e1e510"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:56:55 crc kubenswrapper[4825]: I0122 15:56:55.929479 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95d9e491-a6ee-43ee-bdee-a94b23e1e510-inventory" (OuterVolumeSpecName: "inventory") pod "95d9e491-a6ee-43ee-bdee-a94b23e1e510" (UID: "95d9e491-a6ee-43ee-bdee-a94b23e1e510"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.002174 4825 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95d9e491-a6ee-43ee-bdee-a94b23e1e510-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.002217 4825 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95d9e491-a6ee-43ee-bdee-a94b23e1e510-inventory\") on node \"crc\" DevicePath \"\""
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.002235 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8pjl\" (UniqueName: \"kubernetes.io/projected/95d9e491-a6ee-43ee-bdee-a94b23e1e510-kube-api-access-n8pjl\") on node \"crc\" DevicePath \"\""
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.258590 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc" event={"ID":"95d9e491-a6ee-43ee-bdee-a94b23e1e510","Type":"ContainerDied","Data":"907dbe4f55e716c2ee094c7de1312feb2aa7d633a62dece2690aecd77a795acc"}
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.258653 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="907dbe4f55e716c2ee094c7de1312feb2aa7d633a62dece2690aecd77a795acc"
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.258786 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc"
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.343896 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs"]
Jan 22 15:56:56 crc kubenswrapper[4825]: E0122 15:56:56.344559 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d9e491-a6ee-43ee-bdee-a94b23e1e510" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.344582 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d9e491-a6ee-43ee-bdee-a94b23e1e510" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.344904 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="95d9e491-a6ee-43ee-bdee-a94b23e1e510" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.346193 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs"
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.349530 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.353250 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.353912 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql4gv"
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.353912 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.375070 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs"]
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.513388 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrq46\" (UniqueName: \"kubernetes.io/projected/c37b521d-eed3-4bfc-895d-f8349240a58b-kube-api-access-qrq46\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs\" (UID: \"c37b521d-eed3-4bfc-895d-f8349240a58b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs"
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.513528 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c37b521d-eed3-4bfc-895d-f8349240a58b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs\" (UID: \"c37b521d-eed3-4bfc-895d-f8349240a58b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs"
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.513598 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c37b521d-eed3-4bfc-895d-f8349240a58b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs\" (UID: \"c37b521d-eed3-4bfc-895d-f8349240a58b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs"
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.616426 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c37b521d-eed3-4bfc-895d-f8349240a58b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs\" (UID: \"c37b521d-eed3-4bfc-895d-f8349240a58b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs"
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.616725 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c37b521d-eed3-4bfc-895d-f8349240a58b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs\" (UID: \"c37b521d-eed3-4bfc-895d-f8349240a58b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs"
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.617623 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrq46\" (UniqueName: \"kubernetes.io/projected/c37b521d-eed3-4bfc-895d-f8349240a58b-kube-api-access-qrq46\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs\" (UID: \"c37b521d-eed3-4bfc-895d-f8349240a58b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs"
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.630883 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c37b521d-eed3-4bfc-895d-f8349240a58b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs\" (UID: \"c37b521d-eed3-4bfc-895d-f8349240a58b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs"
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.630992 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c37b521d-eed3-4bfc-895d-f8349240a58b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs\" (UID: \"c37b521d-eed3-4bfc-895d-f8349240a58b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs"
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.636559 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrq46\" (UniqueName: \"kubernetes.io/projected/c37b521d-eed3-4bfc-895d-f8349240a58b-kube-api-access-qrq46\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs\" (UID: \"c37b521d-eed3-4bfc-895d-f8349240a58b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs"
Jan 22 15:56:56 crc kubenswrapper[4825]: I0122 15:56:56.666024 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs"
Jan 22 15:56:57 crc kubenswrapper[4825]: I0122 15:56:57.257513 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs"]
Jan 22 15:56:58 crc kubenswrapper[4825]: I0122 15:56:58.276624 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs" event={"ID":"c37b521d-eed3-4bfc-895d-f8349240a58b","Type":"ContainerStarted","Data":"59398d1f4a24ad062d86026cb8ae67dd74cb0b5ea6e979e271c11e5bba9e3dd6"}
Jan 22 15:56:58 crc kubenswrapper[4825]: I0122 15:56:58.277164 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs" event={"ID":"c37b521d-eed3-4bfc-895d-f8349240a58b","Type":"ContainerStarted","Data":"554af2f85dc234fc1d4bc023d22bff64c8aaffc3a5728c76b1336f6595e72b75"}
Jan 22 15:56:58 crc kubenswrapper[4825]: I0122 15:56:58.299647 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs" podStartSLOduration=1.872061438 podStartE2EDuration="2.299601992s" podCreationTimestamp="2026-01-22 15:56:56 +0000 UTC" firstStartedPulling="2026-01-22 15:56:57.265663073 +0000 UTC m=+1964.027189973" lastFinishedPulling="2026-01-22 15:56:57.693203617 +0000 UTC m=+1964.454730527" observedRunningTime="2026-01-22 15:56:58.296400041 +0000 UTC m=+1965.057926961" watchObservedRunningTime="2026-01-22 15:56:58.299601992 +0000 UTC m=+1965.061128902"
Jan 22 15:57:05 crc kubenswrapper[4825]: I0122 15:57:05.051437 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9vs7x"]
Jan 22 15:57:05 crc kubenswrapper[4825]: I0122 15:57:05.064841 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9vs7x"]
Jan 22 15:57:05 crc kubenswrapper[4825]: I0122 15:57:05.530302 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a02e958-e76f-4351-bf61-0b0f4ec0e410" path="/var/lib/kubelet/pods/7a02e958-e76f-4351-bf61-0b0f4ec0e410/volumes"
Jan 22 15:57:40 crc kubenswrapper[4825]: I0122 15:57:40.055281 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-q7pvf"]
Jan 22 15:57:40 crc kubenswrapper[4825]: I0122 15:57:40.068094 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-q7pvf"]
Jan 22 15:57:41 crc kubenswrapper[4825]: I0122 15:57:41.537888 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d5eac43-e644-430f-b4c7-8003b6984a30" path="/var/lib/kubelet/pods/1d5eac43-e644-430f-b4c7-8003b6984a30/volumes"
Jan 22 15:57:44 crc kubenswrapper[4825]: I0122 15:57:44.029431 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8qbh5"]
Jan 22 15:57:44 crc kubenswrapper[4825]: I0122 15:57:44.039378 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8qbh5"]
Jan 22 15:57:44 crc kubenswrapper[4825]: I0122 15:57:44.207184 4825 scope.go:117] "RemoveContainer" containerID="87b6a24946b6b30dde93bac2c3870a45c6830f605f758a59f59dc31ae76ecb92"
Jan 22 15:57:44 crc kubenswrapper[4825]: I0122 15:57:44.340422 4825 scope.go:117] "RemoveContainer" containerID="76444e0def286411aaecb5e2b7b368677ced5dbdea1b6f1ab77a176368441ccb"
Jan 22 15:57:45 crc kubenswrapper[4825]: I0122 15:57:45.590157 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3802a459-6af8-4a3f-8087-529583d75594" path="/var/lib/kubelet/pods/3802a459-6af8-4a3f-8087-529583d75594/volumes"
Jan 22 15:58:11 crc kubenswrapper[4825]: I0122 15:58:11.173088 4825 generic.go:334] "Generic (PLEG): container finished" podID="c37b521d-eed3-4bfc-895d-f8349240a58b" containerID="59398d1f4a24ad062d86026cb8ae67dd74cb0b5ea6e979e271c11e5bba9e3dd6" exitCode=0
Jan 22 15:58:11 crc kubenswrapper[4825]: I0122 15:58:11.173181 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs" event={"ID":"c37b521d-eed3-4bfc-895d-f8349240a58b","Type":"ContainerDied","Data":"59398d1f4a24ad062d86026cb8ae67dd74cb0b5ea6e979e271c11e5bba9e3dd6"}
Jan 22 15:58:12 crc kubenswrapper[4825]: I0122 15:58:12.756187 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs"
Jan 22 15:58:12 crc kubenswrapper[4825]: I0122 15:58:12.831870 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c37b521d-eed3-4bfc-895d-f8349240a58b-inventory\") pod \"c37b521d-eed3-4bfc-895d-f8349240a58b\" (UID: \"c37b521d-eed3-4bfc-895d-f8349240a58b\") "
Jan 22 15:58:12 crc kubenswrapper[4825]: I0122 15:58:12.832062 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c37b521d-eed3-4bfc-895d-f8349240a58b-ssh-key-openstack-edpm-ipam\") pod \"c37b521d-eed3-4bfc-895d-f8349240a58b\" (UID: \"c37b521d-eed3-4bfc-895d-f8349240a58b\") "
Jan 22 15:58:12 crc kubenswrapper[4825]: I0122 15:58:12.832140 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrq46\" (UniqueName: \"kubernetes.io/projected/c37b521d-eed3-4bfc-895d-f8349240a58b-kube-api-access-qrq46\") pod \"c37b521d-eed3-4bfc-895d-f8349240a58b\" (UID: \"c37b521d-eed3-4bfc-895d-f8349240a58b\") "
Jan 22 15:58:12 crc kubenswrapper[4825]: I0122 15:58:12.851309 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c37b521d-eed3-4bfc-895d-f8349240a58b-kube-api-access-qrq46" (OuterVolumeSpecName: "kube-api-access-qrq46") pod "c37b521d-eed3-4bfc-895d-f8349240a58b" (UID: "c37b521d-eed3-4bfc-895d-f8349240a58b"). InnerVolumeSpecName "kube-api-access-qrq46". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:58:12 crc kubenswrapper[4825]: I0122 15:58:12.929046 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c37b521d-eed3-4bfc-895d-f8349240a58b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c37b521d-eed3-4bfc-895d-f8349240a58b" (UID: "c37b521d-eed3-4bfc-895d-f8349240a58b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:58:12 crc kubenswrapper[4825]: I0122 15:58:12.934507 4825 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c37b521d-eed3-4bfc-895d-f8349240a58b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 22 15:58:12 crc kubenswrapper[4825]: I0122 15:58:12.934552 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrq46\" (UniqueName: \"kubernetes.io/projected/c37b521d-eed3-4bfc-895d-f8349240a58b-kube-api-access-qrq46\") on node \"crc\" DevicePath \"\""
Jan 22 15:58:12 crc kubenswrapper[4825]: I0122 15:58:12.937650 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c37b521d-eed3-4bfc-895d-f8349240a58b-inventory" (OuterVolumeSpecName: "inventory") pod "c37b521d-eed3-4bfc-895d-f8349240a58b" (UID: "c37b521d-eed3-4bfc-895d-f8349240a58b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.037128 4825 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c37b521d-eed3-4bfc-895d-f8349240a58b-inventory\") on node \"crc\" DevicePath \"\""
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.192893 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs" event={"ID":"c37b521d-eed3-4bfc-895d-f8349240a58b","Type":"ContainerDied","Data":"554af2f85dc234fc1d4bc023d22bff64c8aaffc3a5728c76b1336f6595e72b75"}
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.192937 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="554af2f85dc234fc1d4bc023d22bff64c8aaffc3a5728c76b1336f6595e72b75"
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.193255 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs"
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.288877 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk"]
Jan 22 15:58:13 crc kubenswrapper[4825]: E0122 15:58:13.289519 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c37b521d-eed3-4bfc-895d-f8349240a58b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.289548 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="c37b521d-eed3-4bfc-895d-f8349240a58b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.289920 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="c37b521d-eed3-4bfc-895d-f8349240a58b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.291099 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk"
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.293373 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.293552 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql4gv"
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.293817 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.311253 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk"]
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.312795 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.342994 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzl7z\" (UniqueName: \"kubernetes.io/projected/0b5da8d0-15a3-4f88-8aa3-57f5fa886633-kube-api-access-nzl7z\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk\" (UID: \"0b5da8d0-15a3-4f88-8aa3-57f5fa886633\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk"
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.343051 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b5da8d0-15a3-4f88-8aa3-57f5fa886633-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk\" (UID: \"0b5da8d0-15a3-4f88-8aa3-57f5fa886633\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk"
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.343545 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b5da8d0-15a3-4f88-8aa3-57f5fa886633-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk\" (UID: \"0b5da8d0-15a3-4f88-8aa3-57f5fa886633\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk"
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.445914 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzl7z\" (UniqueName: \"kubernetes.io/projected/0b5da8d0-15a3-4f88-8aa3-57f5fa886633-kube-api-access-nzl7z\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk\" (UID: \"0b5da8d0-15a3-4f88-8aa3-57f5fa886633\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk"
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.446064 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b5da8d0-15a3-4f88-8aa3-57f5fa886633-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk\" (UID: \"0b5da8d0-15a3-4f88-8aa3-57f5fa886633\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk"
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.446388 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b5da8d0-15a3-4f88-8aa3-57f5fa886633-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk\" (UID: \"0b5da8d0-15a3-4f88-8aa3-57f5fa886633\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk"
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.449765 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b5da8d0-15a3-4f88-8aa3-57f5fa886633-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk\" (UID: \"0b5da8d0-15a3-4f88-8aa3-57f5fa886633\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk"
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.450430 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b5da8d0-15a3-4f88-8aa3-57f5fa886633-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk\" (UID: \"0b5da8d0-15a3-4f88-8aa3-57f5fa886633\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk"
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.460684 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzl7z\" (UniqueName: \"kubernetes.io/projected/0b5da8d0-15a3-4f88-8aa3-57f5fa886633-kube-api-access-nzl7z\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk\" (UID: \"0b5da8d0-15a3-4f88-8aa3-57f5fa886633\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk"
Jan 22 15:58:13 crc kubenswrapper[4825]: I0122 15:58:13.628641 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk"
Jan 22 15:58:14 crc kubenswrapper[4825]: I0122 15:58:14.207830 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk"]
Jan 22 15:58:14 crc kubenswrapper[4825]: I0122 15:58:14.790857 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 22 15:58:15 crc kubenswrapper[4825]: I0122 15:58:15.218568 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk" event={"ID":"0b5da8d0-15a3-4f88-8aa3-57f5fa886633","Type":"ContainerStarted","Data":"40231e85dccb0b0df2f7362e650e4766b36478eb483fedab0843d16ab1379552"}
Jan 22 15:58:15 crc kubenswrapper[4825]: I0122 15:58:15.218625 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk" event={"ID":"0b5da8d0-15a3-4f88-8aa3-57f5fa886633","Type":"ContainerStarted","Data":"39b843d1088655222d015d13260a822b9a524bcc4493a53d1cf7146d5b2c4456"}
Jan 22 15:58:20 crc kubenswrapper[4825]: I0122 15:58:20.311294 4825 generic.go:334] "Generic (PLEG): container finished" podID="0b5da8d0-15a3-4f88-8aa3-57f5fa886633" containerID="40231e85dccb0b0df2f7362e650e4766b36478eb483fedab0843d16ab1379552" exitCode=0
Jan 22 15:58:20 crc kubenswrapper[4825]: I0122 15:58:20.311919 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk" event={"ID":"0b5da8d0-15a3-4f88-8aa3-57f5fa886633","Type":"ContainerDied","Data":"40231e85dccb0b0df2f7362e650e4766b36478eb483fedab0843d16ab1379552"}
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.222850 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk"
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.304806 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzl7z\" (UniqueName: \"kubernetes.io/projected/0b5da8d0-15a3-4f88-8aa3-57f5fa886633-kube-api-access-nzl7z\") pod \"0b5da8d0-15a3-4f88-8aa3-57f5fa886633\" (UID: \"0b5da8d0-15a3-4f88-8aa3-57f5fa886633\") "
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.305030 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b5da8d0-15a3-4f88-8aa3-57f5fa886633-ssh-key-openstack-edpm-ipam\") pod \"0b5da8d0-15a3-4f88-8aa3-57f5fa886633\" (UID: \"0b5da8d0-15a3-4f88-8aa3-57f5fa886633\") "
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.305145 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b5da8d0-15a3-4f88-8aa3-57f5fa886633-inventory\") pod \"0b5da8d0-15a3-4f88-8aa3-57f5fa886633\" (UID: \"0b5da8d0-15a3-4f88-8aa3-57f5fa886633\") "
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.311934 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b5da8d0-15a3-4f88-8aa3-57f5fa886633-kube-api-access-nzl7z" (OuterVolumeSpecName: "kube-api-access-nzl7z") pod "0b5da8d0-15a3-4f88-8aa3-57f5fa886633" (UID: "0b5da8d0-15a3-4f88-8aa3-57f5fa886633"). InnerVolumeSpecName "kube-api-access-nzl7z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.344599 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk" event={"ID":"0b5da8d0-15a3-4f88-8aa3-57f5fa886633","Type":"ContainerDied","Data":"39b843d1088655222d015d13260a822b9a524bcc4493a53d1cf7146d5b2c4456"}
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.344661 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39b843d1088655222d015d13260a822b9a524bcc4493a53d1cf7146d5b2c4456"
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.344926 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk"
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.346622 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b5da8d0-15a3-4f88-8aa3-57f5fa886633-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0b5da8d0-15a3-4f88-8aa3-57f5fa886633" (UID: "0b5da8d0-15a3-4f88-8aa3-57f5fa886633"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.380260 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b5da8d0-15a3-4f88-8aa3-57f5fa886633-inventory" (OuterVolumeSpecName: "inventory") pod "0b5da8d0-15a3-4f88-8aa3-57f5fa886633" (UID: "0b5da8d0-15a3-4f88-8aa3-57f5fa886633"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.407904 4825 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b5da8d0-15a3-4f88-8aa3-57f5fa886633-inventory\") on node \"crc\" DevicePath \"\""
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.407940 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzl7z\" (UniqueName: \"kubernetes.io/projected/0b5da8d0-15a3-4f88-8aa3-57f5fa886633-kube-api-access-nzl7z\") on node \"crc\" DevicePath \"\""
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.407953 4825 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b5da8d0-15a3-4f88-8aa3-57f5fa886633-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.414645 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r"]
Jan 22 15:58:22 crc kubenswrapper[4825]: E0122 15:58:22.415121 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b5da8d0-15a3-4f88-8aa3-57f5fa886633" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.415137 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5da8d0-15a3-4f88-8aa3-57f5fa886633" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.415421 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b5da8d0-15a3-4f88-8aa3-57f5fa886633" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.416320 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r"
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.425256 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r"]
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.509865 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28rdn\" (UniqueName: \"kubernetes.io/projected/d5f55dc4-ff3c-456c-8d34-b5143b856f0a-kube-api-access-28rdn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4544r\" (UID: \"d5f55dc4-ff3c-456c-8d34-b5143b856f0a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r"
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.510117 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5f55dc4-ff3c-456c-8d34-b5143b856f0a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4544r\" (UID: \"d5f55dc4-ff3c-456c-8d34-b5143b856f0a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r"
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.510736 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5f55dc4-ff3c-456c-8d34-b5143b856f0a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4544r\" (UID: \"d5f55dc4-ff3c-456c-8d34-b5143b856f0a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r"
Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.612560 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5f55dc4-ff3c-456c-8d34-b5143b856f0a-ssh-key-openstack-edpm-ipam\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-4544r\" (UID: \"d5f55dc4-ff3c-456c-8d34-b5143b856f0a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r" Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.612706 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5f55dc4-ff3c-456c-8d34-b5143b856f0a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4544r\" (UID: \"d5f55dc4-ff3c-456c-8d34-b5143b856f0a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r" Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.612786 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28rdn\" (UniqueName: \"kubernetes.io/projected/d5f55dc4-ff3c-456c-8d34-b5143b856f0a-kube-api-access-28rdn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4544r\" (UID: \"d5f55dc4-ff3c-456c-8d34-b5143b856f0a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r" Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.618183 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5f55dc4-ff3c-456c-8d34-b5143b856f0a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4544r\" (UID: \"d5f55dc4-ff3c-456c-8d34-b5143b856f0a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r" Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.631606 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5f55dc4-ff3c-456c-8d34-b5143b856f0a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4544r\" (UID: \"d5f55dc4-ff3c-456c-8d34-b5143b856f0a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r" Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.634913 4825 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28rdn\" (UniqueName: \"kubernetes.io/projected/d5f55dc4-ff3c-456c-8d34-b5143b856f0a-kube-api-access-28rdn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4544r\" (UID: \"d5f55dc4-ff3c-456c-8d34-b5143b856f0a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r" Jan 22 15:58:22 crc kubenswrapper[4825]: I0122 15:58:22.782952 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r" Jan 22 15:58:22 crc kubenswrapper[4825]: E0122 15:58:22.803624 4825 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b5da8d0_15a3_4f88_8aa3_57f5fa886633.slice\": RecentStats: unable to find data in memory cache]" Jan 22 15:58:23 crc kubenswrapper[4825]: I0122 15:58:23.411419 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r"] Jan 22 15:58:24 crc kubenswrapper[4825]: I0122 15:58:24.047718 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-pt4fj"] Jan 22 15:58:24 crc kubenswrapper[4825]: I0122 15:58:24.057042 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-pt4fj"] Jan 22 15:58:24 crc kubenswrapper[4825]: I0122 15:58:24.408805 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r" event={"ID":"d5f55dc4-ff3c-456c-8d34-b5143b856f0a","Type":"ContainerStarted","Data":"17fb555170649a03b099a6bfb5bad307a49f65336146b6f8204a62439c2d695f"} Jan 22 15:58:24 crc kubenswrapper[4825]: I0122 15:58:24.408860 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r" 
event={"ID":"d5f55dc4-ff3c-456c-8d34-b5143b856f0a","Type":"ContainerStarted","Data":"a58b9e5b4f4c2cb8afef69ffd302dd3d93bf29c15e29050a0de36d06c18f0932"} Jan 22 15:58:24 crc kubenswrapper[4825]: I0122 15:58:24.430366 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r" podStartSLOduration=2.006447395 podStartE2EDuration="2.430336805s" podCreationTimestamp="2026-01-22 15:58:22 +0000 UTC" firstStartedPulling="2026-01-22 15:58:23.412265647 +0000 UTC m=+2050.173792557" lastFinishedPulling="2026-01-22 15:58:23.836155057 +0000 UTC m=+2050.597681967" observedRunningTime="2026-01-22 15:58:24.422337468 +0000 UTC m=+2051.183864408" watchObservedRunningTime="2026-01-22 15:58:24.430336805 +0000 UTC m=+2051.191863705" Jan 22 15:58:25 crc kubenswrapper[4825]: I0122 15:58:25.534050 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="020cd9b5-8960-4a30-8322-c1de670f2f10" path="/var/lib/kubelet/pods/020cd9b5-8960-4a30-8322-c1de670f2f10/volumes" Jan 22 15:58:33 crc kubenswrapper[4825]: I0122 15:58:33.207207 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fk2b9"] Jan 22 15:58:33 crc kubenswrapper[4825]: I0122 15:58:33.210783 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fk2b9" Jan 22 15:58:33 crc kubenswrapper[4825]: I0122 15:58:33.219301 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fk2b9"] Jan 22 15:58:33 crc kubenswrapper[4825]: I0122 15:58:33.283588 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v5pm\" (UniqueName: \"kubernetes.io/projected/976ff67e-056b-4e27-a2e8-3e86ff33e5b4-kube-api-access-6v5pm\") pod \"certified-operators-fk2b9\" (UID: \"976ff67e-056b-4e27-a2e8-3e86ff33e5b4\") " pod="openshift-marketplace/certified-operators-fk2b9" Jan 22 15:58:33 crc kubenswrapper[4825]: I0122 15:58:33.283718 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/976ff67e-056b-4e27-a2e8-3e86ff33e5b4-utilities\") pod \"certified-operators-fk2b9\" (UID: \"976ff67e-056b-4e27-a2e8-3e86ff33e5b4\") " pod="openshift-marketplace/certified-operators-fk2b9" Jan 22 15:58:33 crc kubenswrapper[4825]: I0122 15:58:33.283817 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/976ff67e-056b-4e27-a2e8-3e86ff33e5b4-catalog-content\") pod \"certified-operators-fk2b9\" (UID: \"976ff67e-056b-4e27-a2e8-3e86ff33e5b4\") " pod="openshift-marketplace/certified-operators-fk2b9" Jan 22 15:58:33 crc kubenswrapper[4825]: I0122 15:58:33.386393 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/976ff67e-056b-4e27-a2e8-3e86ff33e5b4-catalog-content\") pod \"certified-operators-fk2b9\" (UID: \"976ff67e-056b-4e27-a2e8-3e86ff33e5b4\") " pod="openshift-marketplace/certified-operators-fk2b9" Jan 22 15:58:33 crc kubenswrapper[4825]: I0122 15:58:33.386539 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6v5pm\" (UniqueName: \"kubernetes.io/projected/976ff67e-056b-4e27-a2e8-3e86ff33e5b4-kube-api-access-6v5pm\") pod \"certified-operators-fk2b9\" (UID: \"976ff67e-056b-4e27-a2e8-3e86ff33e5b4\") " pod="openshift-marketplace/certified-operators-fk2b9" Jan 22 15:58:33 crc kubenswrapper[4825]: I0122 15:58:33.386687 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/976ff67e-056b-4e27-a2e8-3e86ff33e5b4-utilities\") pod \"certified-operators-fk2b9\" (UID: \"976ff67e-056b-4e27-a2e8-3e86ff33e5b4\") " pod="openshift-marketplace/certified-operators-fk2b9" Jan 22 15:58:33 crc kubenswrapper[4825]: I0122 15:58:33.387007 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/976ff67e-056b-4e27-a2e8-3e86ff33e5b4-catalog-content\") pod \"certified-operators-fk2b9\" (UID: \"976ff67e-056b-4e27-a2e8-3e86ff33e5b4\") " pod="openshift-marketplace/certified-operators-fk2b9" Jan 22 15:58:33 crc kubenswrapper[4825]: I0122 15:58:33.387284 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/976ff67e-056b-4e27-a2e8-3e86ff33e5b4-utilities\") pod \"certified-operators-fk2b9\" (UID: \"976ff67e-056b-4e27-a2e8-3e86ff33e5b4\") " pod="openshift-marketplace/certified-operators-fk2b9" Jan 22 15:58:33 crc kubenswrapper[4825]: I0122 15:58:33.406878 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v5pm\" (UniqueName: \"kubernetes.io/projected/976ff67e-056b-4e27-a2e8-3e86ff33e5b4-kube-api-access-6v5pm\") pod \"certified-operators-fk2b9\" (UID: \"976ff67e-056b-4e27-a2e8-3e86ff33e5b4\") " pod="openshift-marketplace/certified-operators-fk2b9" Jan 22 15:58:33 crc kubenswrapper[4825]: I0122 15:58:33.530649 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fk2b9" Jan 22 15:58:34 crc kubenswrapper[4825]: I0122 15:58:34.119897 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fk2b9"] Jan 22 15:58:34 crc kubenswrapper[4825]: I0122 15:58:34.512421 4825 generic.go:334] "Generic (PLEG): container finished" podID="976ff67e-056b-4e27-a2e8-3e86ff33e5b4" containerID="6ca63af3e36a3a37eaa0ce0ef19b0155a3493cf893d3335e257fd17b99f49202" exitCode=0 Jan 22 15:58:34 crc kubenswrapper[4825]: I0122 15:58:34.512473 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fk2b9" event={"ID":"976ff67e-056b-4e27-a2e8-3e86ff33e5b4","Type":"ContainerDied","Data":"6ca63af3e36a3a37eaa0ce0ef19b0155a3493cf893d3335e257fd17b99f49202"} Jan 22 15:58:34 crc kubenswrapper[4825]: I0122 15:58:34.512501 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fk2b9" event={"ID":"976ff67e-056b-4e27-a2e8-3e86ff33e5b4","Type":"ContainerStarted","Data":"968a04efd3674270ac159a896ba0cca6025f174f2ccd8268c212e9accfeb4384"} Jan 22 15:58:35 crc kubenswrapper[4825]: I0122 15:58:35.528461 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fk2b9" event={"ID":"976ff67e-056b-4e27-a2e8-3e86ff33e5b4","Type":"ContainerStarted","Data":"3a12a79f0a760dc3f294f9385e3415c1422929d8abe053ca1b1d6bba416469f2"} Jan 22 15:58:37 crc kubenswrapper[4825]: I0122 15:58:37.564691 4825 generic.go:334] "Generic (PLEG): container finished" podID="976ff67e-056b-4e27-a2e8-3e86ff33e5b4" containerID="3a12a79f0a760dc3f294f9385e3415c1422929d8abe053ca1b1d6bba416469f2" exitCode=0 Jan 22 15:58:37 crc kubenswrapper[4825]: I0122 15:58:37.564778 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fk2b9" 
event={"ID":"976ff67e-056b-4e27-a2e8-3e86ff33e5b4","Type":"ContainerDied","Data":"3a12a79f0a760dc3f294f9385e3415c1422929d8abe053ca1b1d6bba416469f2"} Jan 22 15:58:38 crc kubenswrapper[4825]: I0122 15:58:38.577076 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fk2b9" event={"ID":"976ff67e-056b-4e27-a2e8-3e86ff33e5b4","Type":"ContainerStarted","Data":"b9b7fcab3f0b680be4b0c67d91691e1e35c66774927c038a72395ee555ba5187"} Jan 22 15:58:38 crc kubenswrapper[4825]: I0122 15:58:38.605765 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fk2b9" podStartSLOduration=2.121125871 podStartE2EDuration="5.605739589s" podCreationTimestamp="2026-01-22 15:58:33 +0000 UTC" firstStartedPulling="2026-01-22 15:58:34.515260601 +0000 UTC m=+2061.276787511" lastFinishedPulling="2026-01-22 15:58:37.999874319 +0000 UTC m=+2064.761401229" observedRunningTime="2026-01-22 15:58:38.598783141 +0000 UTC m=+2065.360310051" watchObservedRunningTime="2026-01-22 15:58:38.605739589 +0000 UTC m=+2065.367266509" Jan 22 15:58:43 crc kubenswrapper[4825]: I0122 15:58:43.536211 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fk2b9" Jan 22 15:58:43 crc kubenswrapper[4825]: I0122 15:58:43.536888 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fk2b9" Jan 22 15:58:43 crc kubenswrapper[4825]: I0122 15:58:43.618090 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fk2b9" Jan 22 15:58:43 crc kubenswrapper[4825]: I0122 15:58:43.681830 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fk2b9" Jan 22 15:58:43 crc kubenswrapper[4825]: I0122 15:58:43.870856 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-fk2b9"] Jan 22 15:58:44 crc kubenswrapper[4825]: I0122 15:58:44.463466 4825 scope.go:117] "RemoveContainer" containerID="f8d85efd37ea8dc7ff0830b2241a6253c94fbd1de81fee3b3aff4db16b0a1662" Jan 22 15:58:44 crc kubenswrapper[4825]: I0122 15:58:44.529158 4825 scope.go:117] "RemoveContainer" containerID="4f70e69e07fe0c12bcb0cbc744d21b186f94643f57ca9ca3a58f70735bcdc862" Jan 22 15:58:45 crc kubenswrapper[4825]: I0122 15:58:45.657903 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fk2b9" podUID="976ff67e-056b-4e27-a2e8-3e86ff33e5b4" containerName="registry-server" containerID="cri-o://b9b7fcab3f0b680be4b0c67d91691e1e35c66774927c038a72395ee555ba5187" gracePeriod=2 Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.241736 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fk2b9" Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.392797 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/976ff67e-056b-4e27-a2e8-3e86ff33e5b4-catalog-content\") pod \"976ff67e-056b-4e27-a2e8-3e86ff33e5b4\" (UID: \"976ff67e-056b-4e27-a2e8-3e86ff33e5b4\") " Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.393266 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v5pm\" (UniqueName: \"kubernetes.io/projected/976ff67e-056b-4e27-a2e8-3e86ff33e5b4-kube-api-access-6v5pm\") pod \"976ff67e-056b-4e27-a2e8-3e86ff33e5b4\" (UID: \"976ff67e-056b-4e27-a2e8-3e86ff33e5b4\") " Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.393353 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/976ff67e-056b-4e27-a2e8-3e86ff33e5b4-utilities\") pod 
\"976ff67e-056b-4e27-a2e8-3e86ff33e5b4\" (UID: \"976ff67e-056b-4e27-a2e8-3e86ff33e5b4\") " Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.394225 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/976ff67e-056b-4e27-a2e8-3e86ff33e5b4-utilities" (OuterVolumeSpecName: "utilities") pod "976ff67e-056b-4e27-a2e8-3e86ff33e5b4" (UID: "976ff67e-056b-4e27-a2e8-3e86ff33e5b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.398957 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/976ff67e-056b-4e27-a2e8-3e86ff33e5b4-kube-api-access-6v5pm" (OuterVolumeSpecName: "kube-api-access-6v5pm") pod "976ff67e-056b-4e27-a2e8-3e86ff33e5b4" (UID: "976ff67e-056b-4e27-a2e8-3e86ff33e5b4"). InnerVolumeSpecName "kube-api-access-6v5pm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.450457 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/976ff67e-056b-4e27-a2e8-3e86ff33e5b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "976ff67e-056b-4e27-a2e8-3e86ff33e5b4" (UID: "976ff67e-056b-4e27-a2e8-3e86ff33e5b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.496385 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v5pm\" (UniqueName: \"kubernetes.io/projected/976ff67e-056b-4e27-a2e8-3e86ff33e5b4-kube-api-access-6v5pm\") on node \"crc\" DevicePath \"\"" Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.496650 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/976ff67e-056b-4e27-a2e8-3e86ff33e5b4-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.496747 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/976ff67e-056b-4e27-a2e8-3e86ff33e5b4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.672687 4825 generic.go:334] "Generic (PLEG): container finished" podID="976ff67e-056b-4e27-a2e8-3e86ff33e5b4" containerID="b9b7fcab3f0b680be4b0c67d91691e1e35c66774927c038a72395ee555ba5187" exitCode=0 Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.672742 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fk2b9" event={"ID":"976ff67e-056b-4e27-a2e8-3e86ff33e5b4","Type":"ContainerDied","Data":"b9b7fcab3f0b680be4b0c67d91691e1e35c66774927c038a72395ee555ba5187"} Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.672776 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fk2b9" event={"ID":"976ff67e-056b-4e27-a2e8-3e86ff33e5b4","Type":"ContainerDied","Data":"968a04efd3674270ac159a896ba0cca6025f174f2ccd8268c212e9accfeb4384"} Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.672796 4825 scope.go:117] "RemoveContainer" containerID="b9b7fcab3f0b680be4b0c67d91691e1e35c66774927c038a72395ee555ba5187" Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 
15:58:46.673006 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fk2b9" Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.709596 4825 scope.go:117] "RemoveContainer" containerID="3a12a79f0a760dc3f294f9385e3415c1422929d8abe053ca1b1d6bba416469f2" Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.735199 4825 scope.go:117] "RemoveContainer" containerID="6ca63af3e36a3a37eaa0ce0ef19b0155a3493cf893d3335e257fd17b99f49202" Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.738476 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fk2b9"] Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.750339 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fk2b9"] Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.780173 4825 scope.go:117] "RemoveContainer" containerID="b9b7fcab3f0b680be4b0c67d91691e1e35c66774927c038a72395ee555ba5187" Jan 22 15:58:46 crc kubenswrapper[4825]: E0122 15:58:46.780644 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9b7fcab3f0b680be4b0c67d91691e1e35c66774927c038a72395ee555ba5187\": container with ID starting with b9b7fcab3f0b680be4b0c67d91691e1e35c66774927c038a72395ee555ba5187 not found: ID does not exist" containerID="b9b7fcab3f0b680be4b0c67d91691e1e35c66774927c038a72395ee555ba5187" Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.780689 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9b7fcab3f0b680be4b0c67d91691e1e35c66774927c038a72395ee555ba5187"} err="failed to get container status \"b9b7fcab3f0b680be4b0c67d91691e1e35c66774927c038a72395ee555ba5187\": rpc error: code = NotFound desc = could not find container \"b9b7fcab3f0b680be4b0c67d91691e1e35c66774927c038a72395ee555ba5187\": container with ID starting with 
b9b7fcab3f0b680be4b0c67d91691e1e35c66774927c038a72395ee555ba5187 not found: ID does not exist" Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.780716 4825 scope.go:117] "RemoveContainer" containerID="3a12a79f0a760dc3f294f9385e3415c1422929d8abe053ca1b1d6bba416469f2" Jan 22 15:58:46 crc kubenswrapper[4825]: E0122 15:58:46.781303 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a12a79f0a760dc3f294f9385e3415c1422929d8abe053ca1b1d6bba416469f2\": container with ID starting with 3a12a79f0a760dc3f294f9385e3415c1422929d8abe053ca1b1d6bba416469f2 not found: ID does not exist" containerID="3a12a79f0a760dc3f294f9385e3415c1422929d8abe053ca1b1d6bba416469f2" Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.781338 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a12a79f0a760dc3f294f9385e3415c1422929d8abe053ca1b1d6bba416469f2"} err="failed to get container status \"3a12a79f0a760dc3f294f9385e3415c1422929d8abe053ca1b1d6bba416469f2\": rpc error: code = NotFound desc = could not find container \"3a12a79f0a760dc3f294f9385e3415c1422929d8abe053ca1b1d6bba416469f2\": container with ID starting with 3a12a79f0a760dc3f294f9385e3415c1422929d8abe053ca1b1d6bba416469f2 not found: ID does not exist" Jan 22 15:58:46 crc kubenswrapper[4825]: I0122 15:58:46.781377 4825 scope.go:117] "RemoveContainer" containerID="6ca63af3e36a3a37eaa0ce0ef19b0155a3493cf893d3335e257fd17b99f49202" Jan 22 15:58:46 crc kubenswrapper[4825]: E0122 15:58:46.781736 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ca63af3e36a3a37eaa0ce0ef19b0155a3493cf893d3335e257fd17b99f49202\": container with ID starting with 6ca63af3e36a3a37eaa0ce0ef19b0155a3493cf893d3335e257fd17b99f49202 not found: ID does not exist" containerID="6ca63af3e36a3a37eaa0ce0ef19b0155a3493cf893d3335e257fd17b99f49202" Jan 22 15:58:46 crc 
kubenswrapper[4825]: I0122 15:58:46.781761 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ca63af3e36a3a37eaa0ce0ef19b0155a3493cf893d3335e257fd17b99f49202"} err="failed to get container status \"6ca63af3e36a3a37eaa0ce0ef19b0155a3493cf893d3335e257fd17b99f49202\": rpc error: code = NotFound desc = could not find container \"6ca63af3e36a3a37eaa0ce0ef19b0155a3493cf893d3335e257fd17b99f49202\": container with ID starting with 6ca63af3e36a3a37eaa0ce0ef19b0155a3493cf893d3335e257fd17b99f49202 not found: ID does not exist" Jan 22 15:58:47 crc kubenswrapper[4825]: I0122 15:58:47.532623 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="976ff67e-056b-4e27-a2e8-3e86ff33e5b4" path="/var/lib/kubelet/pods/976ff67e-056b-4e27-a2e8-3e86ff33e5b4/volumes" Jan 22 15:59:02 crc kubenswrapper[4825]: I0122 15:59:02.865401 4825 generic.go:334] "Generic (PLEG): container finished" podID="d5f55dc4-ff3c-456c-8d34-b5143b856f0a" containerID="17fb555170649a03b099a6bfb5bad307a49f65336146b6f8204a62439c2d695f" exitCode=0 Jan 22 15:59:02 crc kubenswrapper[4825]: I0122 15:59:02.865891 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r" event={"ID":"d5f55dc4-ff3c-456c-8d34-b5143b856f0a","Type":"ContainerDied","Data":"17fb555170649a03b099a6bfb5bad307a49f65336146b6f8204a62439c2d695f"} Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.413559 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rmbvb"] Jan 22 15:59:04 crc kubenswrapper[4825]: E0122 15:59:04.420548 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="976ff67e-056b-4e27-a2e8-3e86ff33e5b4" containerName="extract-utilities" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.420586 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="976ff67e-056b-4e27-a2e8-3e86ff33e5b4" containerName="extract-utilities" Jan 22 15:59:04 
crc kubenswrapper[4825]: E0122 15:59:04.420613 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="976ff67e-056b-4e27-a2e8-3e86ff33e5b4" containerName="extract-content" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.420621 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="976ff67e-056b-4e27-a2e8-3e86ff33e5b4" containerName="extract-content" Jan 22 15:59:04 crc kubenswrapper[4825]: E0122 15:59:04.420635 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="976ff67e-056b-4e27-a2e8-3e86ff33e5b4" containerName="registry-server" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.420640 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="976ff67e-056b-4e27-a2e8-3e86ff33e5b4" containerName="registry-server" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.421357 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="976ff67e-056b-4e27-a2e8-3e86ff33e5b4" containerName="registry-server" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.431008 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rmbvb" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.439310 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c66a6d9-16f5-4dc5-8287-1cde3fbc3096-utilities\") pod \"redhat-operators-rmbvb\" (UID: \"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096\") " pod="openshift-marketplace/redhat-operators-rmbvb" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.439414 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c66a6d9-16f5-4dc5-8287-1cde3fbc3096-catalog-content\") pod \"redhat-operators-rmbvb\" (UID: \"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096\") " pod="openshift-marketplace/redhat-operators-rmbvb" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.439456 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jm4p\" (UniqueName: \"kubernetes.io/projected/8c66a6d9-16f5-4dc5-8287-1cde3fbc3096-kube-api-access-4jm4p\") pod \"redhat-operators-rmbvb\" (UID: \"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096\") " pod="openshift-marketplace/redhat-operators-rmbvb" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.474765 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rmbvb"] Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.520712 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.545558 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c66a6d9-16f5-4dc5-8287-1cde3fbc3096-utilities\") pod \"redhat-operators-rmbvb\" (UID: \"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096\") " pod="openshift-marketplace/redhat-operators-rmbvb" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.545642 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c66a6d9-16f5-4dc5-8287-1cde3fbc3096-catalog-content\") pod \"redhat-operators-rmbvb\" (UID: \"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096\") " pod="openshift-marketplace/redhat-operators-rmbvb" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.545674 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jm4p\" (UniqueName: \"kubernetes.io/projected/8c66a6d9-16f5-4dc5-8287-1cde3fbc3096-kube-api-access-4jm4p\") pod \"redhat-operators-rmbvb\" (UID: \"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096\") " pod="openshift-marketplace/redhat-operators-rmbvb" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.546744 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c66a6d9-16f5-4dc5-8287-1cde3fbc3096-utilities\") pod \"redhat-operators-rmbvb\" (UID: \"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096\") " pod="openshift-marketplace/redhat-operators-rmbvb" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.547582 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c66a6d9-16f5-4dc5-8287-1cde3fbc3096-catalog-content\") pod \"redhat-operators-rmbvb\" (UID: \"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096\") " 
pod="openshift-marketplace/redhat-operators-rmbvb" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.577221 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jm4p\" (UniqueName: \"kubernetes.io/projected/8c66a6d9-16f5-4dc5-8287-1cde3fbc3096-kube-api-access-4jm4p\") pod \"redhat-operators-rmbvb\" (UID: \"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096\") " pod="openshift-marketplace/redhat-operators-rmbvb" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.662267 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5f55dc4-ff3c-456c-8d34-b5143b856f0a-inventory\") pod \"d5f55dc4-ff3c-456c-8d34-b5143b856f0a\" (UID: \"d5f55dc4-ff3c-456c-8d34-b5143b856f0a\") " Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.662353 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5f55dc4-ff3c-456c-8d34-b5143b856f0a-ssh-key-openstack-edpm-ipam\") pod \"d5f55dc4-ff3c-456c-8d34-b5143b856f0a\" (UID: \"d5f55dc4-ff3c-456c-8d34-b5143b856f0a\") " Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.662443 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28rdn\" (UniqueName: \"kubernetes.io/projected/d5f55dc4-ff3c-456c-8d34-b5143b856f0a-kube-api-access-28rdn\") pod \"d5f55dc4-ff3c-456c-8d34-b5143b856f0a\" (UID: \"d5f55dc4-ff3c-456c-8d34-b5143b856f0a\") " Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.674376 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5f55dc4-ff3c-456c-8d34-b5143b856f0a-kube-api-access-28rdn" (OuterVolumeSpecName: "kube-api-access-28rdn") pod "d5f55dc4-ff3c-456c-8d34-b5143b856f0a" (UID: "d5f55dc4-ff3c-456c-8d34-b5143b856f0a"). InnerVolumeSpecName "kube-api-access-28rdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.693547 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f55dc4-ff3c-456c-8d34-b5143b856f0a-inventory" (OuterVolumeSpecName: "inventory") pod "d5f55dc4-ff3c-456c-8d34-b5143b856f0a" (UID: "d5f55dc4-ff3c-456c-8d34-b5143b856f0a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.718905 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f55dc4-ff3c-456c-8d34-b5143b856f0a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d5f55dc4-ff3c-456c-8d34-b5143b856f0a" (UID: "d5f55dc4-ff3c-456c-8d34-b5143b856f0a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.764305 4825 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5f55dc4-ff3c-456c-8d34-b5143b856f0a-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.764342 4825 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5f55dc4-ff3c-456c-8d34-b5143b856f0a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.764353 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28rdn\" (UniqueName: \"kubernetes.io/projected/d5f55dc4-ff3c-456c-8d34-b5143b856f0a-kube-api-access-28rdn\") on node \"crc\" DevicePath \"\"" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.846101 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rmbvb" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.890046 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r" event={"ID":"d5f55dc4-ff3c-456c-8d34-b5143b856f0a","Type":"ContainerDied","Data":"a58b9e5b4f4c2cb8afef69ffd302dd3d93bf29c15e29050a0de36d06c18f0932"} Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.890099 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a58b9e5b4f4c2cb8afef69ffd302dd3d93bf29c15e29050a0de36d06c18f0932" Jan 22 15:59:04 crc kubenswrapper[4825]: I0122 15:59:04.890177 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4544r" Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.096168 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52"] Jan 22 15:59:05 crc kubenswrapper[4825]: E0122 15:59:05.096843 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f55dc4-ff3c-456c-8d34-b5143b856f0a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.096869 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f55dc4-ff3c-456c-8d34-b5143b856f0a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.097173 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f55dc4-ff3c-456c-8d34-b5143b856f0a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.098248 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52" Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.103745 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.104070 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.104131 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.104309 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql4gv" Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.108461 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52"] Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.172833 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90cd4aa4-003a-423a-a15b-1f0321375a34-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m2b52\" (UID: \"90cd4aa4-003a-423a-a15b-1f0321375a34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52" Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.173131 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f4dw\" (UniqueName: \"kubernetes.io/projected/90cd4aa4-003a-423a-a15b-1f0321375a34-kube-api-access-9f4dw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m2b52\" (UID: \"90cd4aa4-003a-423a-a15b-1f0321375a34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52" Jan 22 15:59:05 crc 
kubenswrapper[4825]: I0122 15:59:05.173198 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90cd4aa4-003a-423a-a15b-1f0321375a34-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m2b52\" (UID: \"90cd4aa4-003a-423a-a15b-1f0321375a34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52" Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.275041 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90cd4aa4-003a-423a-a15b-1f0321375a34-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m2b52\" (UID: \"90cd4aa4-003a-423a-a15b-1f0321375a34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52" Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.275381 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90cd4aa4-003a-423a-a15b-1f0321375a34-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m2b52\" (UID: \"90cd4aa4-003a-423a-a15b-1f0321375a34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52" Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.275444 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f4dw\" (UniqueName: \"kubernetes.io/projected/90cd4aa4-003a-423a-a15b-1f0321375a34-kube-api-access-9f4dw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m2b52\" (UID: \"90cd4aa4-003a-423a-a15b-1f0321375a34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52" Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.282391 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/90cd4aa4-003a-423a-a15b-1f0321375a34-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m2b52\" (UID: \"90cd4aa4-003a-423a-a15b-1f0321375a34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52" Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.283208 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90cd4aa4-003a-423a-a15b-1f0321375a34-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m2b52\" (UID: \"90cd4aa4-003a-423a-a15b-1f0321375a34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52" Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.297554 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f4dw\" (UniqueName: \"kubernetes.io/projected/90cd4aa4-003a-423a-a15b-1f0321375a34-kube-api-access-9f4dw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m2b52\" (UID: \"90cd4aa4-003a-423a-a15b-1f0321375a34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52" Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.430510 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52" Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.439514 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rmbvb"] Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.553631 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.553690 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.903301 4825 generic.go:334] "Generic (PLEG): container finished" podID="8c66a6d9-16f5-4dc5-8287-1cde3fbc3096" containerID="13f6349a8f86cbd719ab6076b722e81e644f2d2da7bc2241949ef513f721348d" exitCode=0 Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.903483 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmbvb" event={"ID":"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096","Type":"ContainerDied","Data":"13f6349a8f86cbd719ab6076b722e81e644f2d2da7bc2241949ef513f721348d"} Jan 22 15:59:05 crc kubenswrapper[4825]: I0122 15:59:05.903646 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmbvb" event={"ID":"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096","Type":"ContainerStarted","Data":"998212c643e2981ceb82b685b49b5a7f895a1a532d75b48ddcb613224766be6b"} Jan 22 15:59:06 crc kubenswrapper[4825]: I0122 15:59:06.532669 4825 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52"] Jan 22 15:59:06 crc kubenswrapper[4825]: I0122 15:59:06.919250 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52" event={"ID":"90cd4aa4-003a-423a-a15b-1f0321375a34","Type":"ContainerStarted","Data":"316c65c83b4dfe3cf08c96dd519e7d0bdbc3d746fb2b273c67025e18d5c0980a"} Jan 22 15:59:07 crc kubenswrapper[4825]: I0122 15:59:07.930865 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmbvb" event={"ID":"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096","Type":"ContainerStarted","Data":"b5bb438688d644068d6b1cbf0fa31dd2d1f408c201bdd24833d2a0d9812e3a63"} Jan 22 15:59:07 crc kubenswrapper[4825]: I0122 15:59:07.932825 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52" event={"ID":"90cd4aa4-003a-423a-a15b-1f0321375a34","Type":"ContainerStarted","Data":"fef146c579613a2c38a9b532329d9a7807ca1b95e99f1983d6d515b7d0c4d640"} Jan 22 15:59:07 crc kubenswrapper[4825]: I0122 15:59:07.973563 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52" podStartSLOduration=2.5720429190000003 podStartE2EDuration="2.973539703s" podCreationTimestamp="2026-01-22 15:59:05 +0000 UTC" firstStartedPulling="2026-01-22 15:59:06.544397399 +0000 UTC m=+2093.305924309" lastFinishedPulling="2026-01-22 15:59:06.945894183 +0000 UTC m=+2093.707421093" observedRunningTime="2026-01-22 15:59:07.968841269 +0000 UTC m=+2094.730368189" watchObservedRunningTime="2026-01-22 15:59:07.973539703 +0000 UTC m=+2094.735066613" Jan 22 15:59:13 crc kubenswrapper[4825]: I0122 15:59:13.096083 4825 generic.go:334] "Generic (PLEG): container finished" podID="8c66a6d9-16f5-4dc5-8287-1cde3fbc3096" 
containerID="b5bb438688d644068d6b1cbf0fa31dd2d1f408c201bdd24833d2a0d9812e3a63" exitCode=0 Jan 22 15:59:13 crc kubenswrapper[4825]: I0122 15:59:13.096658 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmbvb" event={"ID":"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096","Type":"ContainerDied","Data":"b5bb438688d644068d6b1cbf0fa31dd2d1f408c201bdd24833d2a0d9812e3a63"} Jan 22 15:59:14 crc kubenswrapper[4825]: I0122 15:59:14.110207 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmbvb" event={"ID":"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096","Type":"ContainerStarted","Data":"03e7f51e80f4670ada2b030d53972ea106c3bfa7412773428cc26c17076d1d7b"} Jan 22 15:59:14 crc kubenswrapper[4825]: I0122 15:59:14.142471 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rmbvb" podStartSLOduration=2.350862757 podStartE2EDuration="10.142450047s" podCreationTimestamp="2026-01-22 15:59:04 +0000 UTC" firstStartedPulling="2026-01-22 15:59:05.915857985 +0000 UTC m=+2092.677384905" lastFinishedPulling="2026-01-22 15:59:13.707445285 +0000 UTC m=+2100.468972195" observedRunningTime="2026-01-22 15:59:14.131835735 +0000 UTC m=+2100.893362645" watchObservedRunningTime="2026-01-22 15:59:14.142450047 +0000 UTC m=+2100.903976957" Jan 22 15:59:14 crc kubenswrapper[4825]: I0122 15:59:14.846371 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rmbvb" Jan 22 15:59:14 crc kubenswrapper[4825]: I0122 15:59:14.846730 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rmbvb" Jan 22 15:59:15 crc kubenswrapper[4825]: I0122 15:59:15.894679 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rmbvb" podUID="8c66a6d9-16f5-4dc5-8287-1cde3fbc3096" containerName="registry-server" 
probeResult="failure" output=< Jan 22 15:59:15 crc kubenswrapper[4825]: timeout: failed to connect service ":50051" within 1s Jan 22 15:59:15 crc kubenswrapper[4825]: > Jan 22 15:59:18 crc kubenswrapper[4825]: I0122 15:59:18.053101 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-f48qr"] Jan 22 15:59:18 crc kubenswrapper[4825]: I0122 15:59:18.064567 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-f48qr"] Jan 22 15:59:19 crc kubenswrapper[4825]: I0122 15:59:19.564156 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8906ca3b-553a-4473-8b78-ab2de85f25a8" path="/var/lib/kubelet/pods/8906ca3b-553a-4473-8b78-ab2de85f25a8/volumes" Jan 22 15:59:21 crc kubenswrapper[4825]: I0122 15:59:21.865535 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9qzbv"] Jan 22 15:59:21 crc kubenswrapper[4825]: I0122 15:59:21.871900 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9qzbv" Jan 22 15:59:21 crc kubenswrapper[4825]: I0122 15:59:21.890484 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9qzbv"] Jan 22 15:59:22 crc kubenswrapper[4825]: I0122 15:59:22.059247 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46d844f1-5ad4-43f7-a6d4-2793181f982d-utilities\") pod \"community-operators-9qzbv\" (UID: \"46d844f1-5ad4-43f7-a6d4-2793181f982d\") " pod="openshift-marketplace/community-operators-9qzbv" Jan 22 15:59:22 crc kubenswrapper[4825]: I0122 15:59:22.060018 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46d844f1-5ad4-43f7-a6d4-2793181f982d-catalog-content\") pod \"community-operators-9qzbv\" (UID: \"46d844f1-5ad4-43f7-a6d4-2793181f982d\") " pod="openshift-marketplace/community-operators-9qzbv" Jan 22 15:59:22 crc kubenswrapper[4825]: I0122 15:59:22.060425 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scbfv\" (UniqueName: \"kubernetes.io/projected/46d844f1-5ad4-43f7-a6d4-2793181f982d-kube-api-access-scbfv\") pod \"community-operators-9qzbv\" (UID: \"46d844f1-5ad4-43f7-a6d4-2793181f982d\") " pod="openshift-marketplace/community-operators-9qzbv" Jan 22 15:59:22 crc kubenswrapper[4825]: I0122 15:59:22.162898 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46d844f1-5ad4-43f7-a6d4-2793181f982d-catalog-content\") pod \"community-operators-9qzbv\" (UID: \"46d844f1-5ad4-43f7-a6d4-2793181f982d\") " pod="openshift-marketplace/community-operators-9qzbv" Jan 22 15:59:22 crc kubenswrapper[4825]: I0122 15:59:22.163086 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-scbfv\" (UniqueName: \"kubernetes.io/projected/46d844f1-5ad4-43f7-a6d4-2793181f982d-kube-api-access-scbfv\") pod \"community-operators-9qzbv\" (UID: \"46d844f1-5ad4-43f7-a6d4-2793181f982d\") " pod="openshift-marketplace/community-operators-9qzbv" Jan 22 15:59:22 crc kubenswrapper[4825]: I0122 15:59:22.163129 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46d844f1-5ad4-43f7-a6d4-2793181f982d-utilities\") pod \"community-operators-9qzbv\" (UID: \"46d844f1-5ad4-43f7-a6d4-2793181f982d\") " pod="openshift-marketplace/community-operators-9qzbv" Jan 22 15:59:22 crc kubenswrapper[4825]: I0122 15:59:22.164060 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46d844f1-5ad4-43f7-a6d4-2793181f982d-catalog-content\") pod \"community-operators-9qzbv\" (UID: \"46d844f1-5ad4-43f7-a6d4-2793181f982d\") " pod="openshift-marketplace/community-operators-9qzbv" Jan 22 15:59:22 crc kubenswrapper[4825]: I0122 15:59:22.164100 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46d844f1-5ad4-43f7-a6d4-2793181f982d-utilities\") pod \"community-operators-9qzbv\" (UID: \"46d844f1-5ad4-43f7-a6d4-2793181f982d\") " pod="openshift-marketplace/community-operators-9qzbv" Jan 22 15:59:22 crc kubenswrapper[4825]: I0122 15:59:22.197715 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scbfv\" (UniqueName: \"kubernetes.io/projected/46d844f1-5ad4-43f7-a6d4-2793181f982d-kube-api-access-scbfv\") pod \"community-operators-9qzbv\" (UID: \"46d844f1-5ad4-43f7-a6d4-2793181f982d\") " pod="openshift-marketplace/community-operators-9qzbv" Jan 22 15:59:22 crc kubenswrapper[4825]: I0122 15:59:22.203160 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9qzbv" Jan 22 15:59:23 crc kubenswrapper[4825]: I0122 15:59:23.073068 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9qzbv"] Jan 22 15:59:23 crc kubenswrapper[4825]: I0122 15:59:23.351847 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qzbv" event={"ID":"46d844f1-5ad4-43f7-a6d4-2793181f982d","Type":"ContainerStarted","Data":"2f45e4bd31ff049b327ff7b0f6d8c0f68cffb7bdfdf6773955fa387fa65bd178"} Jan 22 15:59:23 crc kubenswrapper[4825]: I0122 15:59:23.352542 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qzbv" event={"ID":"46d844f1-5ad4-43f7-a6d4-2793181f982d","Type":"ContainerStarted","Data":"21f367a63c1bdc939a1f6aaa261d18e1ec555d35a6dad7a3b4c2585c01aa0d04"} Jan 22 15:59:24 crc kubenswrapper[4825]: I0122 15:59:24.364038 4825 generic.go:334] "Generic (PLEG): container finished" podID="46d844f1-5ad4-43f7-a6d4-2793181f982d" containerID="2f45e4bd31ff049b327ff7b0f6d8c0f68cffb7bdfdf6773955fa387fa65bd178" exitCode=0 Jan 22 15:59:24 crc kubenswrapper[4825]: I0122 15:59:24.364118 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qzbv" event={"ID":"46d844f1-5ad4-43f7-a6d4-2793181f982d","Type":"ContainerDied","Data":"2f45e4bd31ff049b327ff7b0f6d8c0f68cffb7bdfdf6773955fa387fa65bd178"} Jan 22 15:59:24 crc kubenswrapper[4825]: I0122 15:59:24.913623 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rmbvb" Jan 22 15:59:24 crc kubenswrapper[4825]: I0122 15:59:24.984711 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rmbvb" Jan 22 15:59:25 crc kubenswrapper[4825]: I0122 15:59:25.197999 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cloudkitty-storageinit-bsgbr"] Jan 22 15:59:25 crc kubenswrapper[4825]: I0122 15:59:25.241941 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-bsgbr"] Jan 22 15:59:25 crc kubenswrapper[4825]: I0122 15:59:25.529002 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d75816f7-ded7-47ef-be7f-f471a696cde4" path="/var/lib/kubelet/pods/d75816f7-ded7-47ef-be7f-f471a696cde4/volumes" Jan 22 15:59:26 crc kubenswrapper[4825]: I0122 15:59:26.385097 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qzbv" event={"ID":"46d844f1-5ad4-43f7-a6d4-2793181f982d","Type":"ContainerStarted","Data":"1023139d58fa952e56081eb8cc4080769337c30c2e1a201217680cb28b1c0a4e"} Jan 22 15:59:27 crc kubenswrapper[4825]: I0122 15:59:27.229184 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rmbvb"] Jan 22 15:59:27 crc kubenswrapper[4825]: I0122 15:59:27.229823 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rmbvb" podUID="8c66a6d9-16f5-4dc5-8287-1cde3fbc3096" containerName="registry-server" containerID="cri-o://03e7f51e80f4670ada2b030d53972ea106c3bfa7412773428cc26c17076d1d7b" gracePeriod=2 Jan 22 15:59:28 crc kubenswrapper[4825]: I0122 15:59:28.408574 4825 generic.go:334] "Generic (PLEG): container finished" podID="46d844f1-5ad4-43f7-a6d4-2793181f982d" containerID="1023139d58fa952e56081eb8cc4080769337c30c2e1a201217680cb28b1c0a4e" exitCode=0 Jan 22 15:59:28 crc kubenswrapper[4825]: I0122 15:59:28.408706 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qzbv" event={"ID":"46d844f1-5ad4-43f7-a6d4-2793181f982d","Type":"ContainerDied","Data":"1023139d58fa952e56081eb8cc4080769337c30c2e1a201217680cb28b1c0a4e"} Jan 22 15:59:28 crc kubenswrapper[4825]: I0122 15:59:28.412136 4825 generic.go:334] "Generic 
(PLEG): container finished" podID="8c66a6d9-16f5-4dc5-8287-1cde3fbc3096" containerID="03e7f51e80f4670ada2b030d53972ea106c3bfa7412773428cc26c17076d1d7b" exitCode=0 Jan 22 15:59:28 crc kubenswrapper[4825]: I0122 15:59:28.412168 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmbvb" event={"ID":"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096","Type":"ContainerDied","Data":"03e7f51e80f4670ada2b030d53972ea106c3bfa7412773428cc26c17076d1d7b"} Jan 22 15:59:28 crc kubenswrapper[4825]: I0122 15:59:28.412189 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmbvb" event={"ID":"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096","Type":"ContainerDied","Data":"998212c643e2981ceb82b685b49b5a7f895a1a532d75b48ddcb613224766be6b"} Jan 22 15:59:28 crc kubenswrapper[4825]: I0122 15:59:28.412201 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="998212c643e2981ceb82b685b49b5a7f895a1a532d75b48ddcb613224766be6b" Jan 22 15:59:28 crc kubenswrapper[4825]: I0122 15:59:28.428566 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rmbvb" Jan 22 15:59:28 crc kubenswrapper[4825]: I0122 15:59:28.623297 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c66a6d9-16f5-4dc5-8287-1cde3fbc3096-catalog-content\") pod \"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096\" (UID: \"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096\") " Jan 22 15:59:28 crc kubenswrapper[4825]: I0122 15:59:28.623737 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c66a6d9-16f5-4dc5-8287-1cde3fbc3096-utilities\") pod \"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096\" (UID: \"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096\") " Jan 22 15:59:28 crc kubenswrapper[4825]: I0122 15:59:28.623848 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jm4p\" (UniqueName: \"kubernetes.io/projected/8c66a6d9-16f5-4dc5-8287-1cde3fbc3096-kube-api-access-4jm4p\") pod \"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096\" (UID: \"8c66a6d9-16f5-4dc5-8287-1cde3fbc3096\") " Jan 22 15:59:28 crc kubenswrapper[4825]: I0122 15:59:28.627618 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c66a6d9-16f5-4dc5-8287-1cde3fbc3096-utilities" (OuterVolumeSpecName: "utilities") pod "8c66a6d9-16f5-4dc5-8287-1cde3fbc3096" (UID: "8c66a6d9-16f5-4dc5-8287-1cde3fbc3096"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 15:59:28 crc kubenswrapper[4825]: I0122 15:59:28.639436 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c66a6d9-16f5-4dc5-8287-1cde3fbc3096-kube-api-access-4jm4p" (OuterVolumeSpecName: "kube-api-access-4jm4p") pod "8c66a6d9-16f5-4dc5-8287-1cde3fbc3096" (UID: "8c66a6d9-16f5-4dc5-8287-1cde3fbc3096"). InnerVolumeSpecName "kube-api-access-4jm4p". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:59:28 crc kubenswrapper[4825]: I0122 15:59:28.726931 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c66a6d9-16f5-4dc5-8287-1cde3fbc3096-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 15:59:28 crc kubenswrapper[4825]: I0122 15:59:28.726972 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jm4p\" (UniqueName: \"kubernetes.io/projected/8c66a6d9-16f5-4dc5-8287-1cde3fbc3096-kube-api-access-4jm4p\") on node \"crc\" DevicePath \"\""
Jan 22 15:59:28 crc kubenswrapper[4825]: I0122 15:59:28.748071 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c66a6d9-16f5-4dc5-8287-1cde3fbc3096-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c66a6d9-16f5-4dc5-8287-1cde3fbc3096" (UID: "8c66a6d9-16f5-4dc5-8287-1cde3fbc3096"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:59:28 crc kubenswrapper[4825]: I0122 15:59:28.829359 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c66a6d9-16f5-4dc5-8287-1cde3fbc3096-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 15:59:29 crc kubenswrapper[4825]: I0122 15:59:29.774858 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmbvb"
Jan 22 15:59:29 crc kubenswrapper[4825]: I0122 15:59:29.827003 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rmbvb"]
Jan 22 15:59:29 crc kubenswrapper[4825]: I0122 15:59:29.846967 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rmbvb"]
Jan 22 15:59:30 crc kubenswrapper[4825]: I0122 15:59:30.787663 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qzbv" event={"ID":"46d844f1-5ad4-43f7-a6d4-2793181f982d","Type":"ContainerStarted","Data":"b2601a664db7eaa3b33bb5f93ffc21d365df198682c5f5fc30791c30e22e28f8"}
Jan 22 15:59:30 crc kubenswrapper[4825]: I0122 15:59:30.822090 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9qzbv" podStartSLOduration=5.147852772 podStartE2EDuration="9.822048649s" podCreationTimestamp="2026-01-22 15:59:21 +0000 UTC" firstStartedPulling="2026-01-22 15:59:24.36669425 +0000 UTC m=+2111.128221150" lastFinishedPulling="2026-01-22 15:59:29.040890117 +0000 UTC m=+2115.802417027" observedRunningTime="2026-01-22 15:59:30.812794127 +0000 UTC m=+2117.574321037" watchObservedRunningTime="2026-01-22 15:59:30.822048649 +0000 UTC m=+2117.583575559"
Jan 22 15:59:31 crc kubenswrapper[4825]: I0122 15:59:31.533196 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c66a6d9-16f5-4dc5-8287-1cde3fbc3096" path="/var/lib/kubelet/pods/8c66a6d9-16f5-4dc5-8287-1cde3fbc3096/volumes"
Jan 22 15:59:32 crc kubenswrapper[4825]: I0122 15:59:32.204030 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9qzbv"
Jan 22 15:59:32 crc kubenswrapper[4825]: I0122 15:59:32.204269 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9qzbv"
Jan 22 15:59:32 crc kubenswrapper[4825]: I0122 15:59:32.255916 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9qzbv"
Jan 22 15:59:35 crc kubenswrapper[4825]: I0122 15:59:35.541472 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 15:59:35 crc kubenswrapper[4825]: I0122 15:59:35.541918 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 15:59:42 crc kubenswrapper[4825]: I0122 15:59:42.272073 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9qzbv"
Jan 22 15:59:42 crc kubenswrapper[4825]: I0122 15:59:42.329351 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9qzbv"]
Jan 22 15:59:42 crc kubenswrapper[4825]: I0122 15:59:42.982757 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9qzbv" podUID="46d844f1-5ad4-43f7-a6d4-2793181f982d" containerName="registry-server" containerID="cri-o://b2601a664db7eaa3b33bb5f93ffc21d365df198682c5f5fc30791c30e22e28f8" gracePeriod=2
Jan 22 15:59:43 crc kubenswrapper[4825]: I0122 15:59:43.616852 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9qzbv"
Jan 22 15:59:43 crc kubenswrapper[4825]: I0122 15:59:43.790584 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46d844f1-5ad4-43f7-a6d4-2793181f982d-utilities\") pod \"46d844f1-5ad4-43f7-a6d4-2793181f982d\" (UID: \"46d844f1-5ad4-43f7-a6d4-2793181f982d\") "
Jan 22 15:59:43 crc kubenswrapper[4825]: I0122 15:59:43.791205 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46d844f1-5ad4-43f7-a6d4-2793181f982d-catalog-content\") pod \"46d844f1-5ad4-43f7-a6d4-2793181f982d\" (UID: \"46d844f1-5ad4-43f7-a6d4-2793181f982d\") "
Jan 22 15:59:43 crc kubenswrapper[4825]: I0122 15:59:43.791420 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scbfv\" (UniqueName: \"kubernetes.io/projected/46d844f1-5ad4-43f7-a6d4-2793181f982d-kube-api-access-scbfv\") pod \"46d844f1-5ad4-43f7-a6d4-2793181f982d\" (UID: \"46d844f1-5ad4-43f7-a6d4-2793181f982d\") "
Jan 22 15:59:43 crc kubenswrapper[4825]: I0122 15:59:43.791730 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46d844f1-5ad4-43f7-a6d4-2793181f982d-utilities" (OuterVolumeSpecName: "utilities") pod "46d844f1-5ad4-43f7-a6d4-2793181f982d" (UID: "46d844f1-5ad4-43f7-a6d4-2793181f982d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:59:43 crc kubenswrapper[4825]: I0122 15:59:43.792224 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46d844f1-5ad4-43f7-a6d4-2793181f982d-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 15:59:43 crc kubenswrapper[4825]: I0122 15:59:43.800221 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46d844f1-5ad4-43f7-a6d4-2793181f982d-kube-api-access-scbfv" (OuterVolumeSpecName: "kube-api-access-scbfv") pod "46d844f1-5ad4-43f7-a6d4-2793181f982d" (UID: "46d844f1-5ad4-43f7-a6d4-2793181f982d"). InnerVolumeSpecName "kube-api-access-scbfv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 15:59:43 crc kubenswrapper[4825]: I0122 15:59:43.860206 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46d844f1-5ad4-43f7-a6d4-2793181f982d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46d844f1-5ad4-43f7-a6d4-2793181f982d" (UID: "46d844f1-5ad4-43f7-a6d4-2793181f982d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 15:59:43 crc kubenswrapper[4825]: I0122 15:59:43.894934 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46d844f1-5ad4-43f7-a6d4-2793181f982d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 15:59:43 crc kubenswrapper[4825]: I0122 15:59:43.894972 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scbfv\" (UniqueName: \"kubernetes.io/projected/46d844f1-5ad4-43f7-a6d4-2793181f982d-kube-api-access-scbfv\") on node \"crc\" DevicePath \"\""
Jan 22 15:59:43 crc kubenswrapper[4825]: I0122 15:59:43.995401 4825 generic.go:334] "Generic (PLEG): container finished" podID="46d844f1-5ad4-43f7-a6d4-2793181f982d" containerID="b2601a664db7eaa3b33bb5f93ffc21d365df198682c5f5fc30791c30e22e28f8" exitCode=0
Jan 22 15:59:43 crc kubenswrapper[4825]: I0122 15:59:43.995472 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9qzbv"
Jan 22 15:59:43 crc kubenswrapper[4825]: I0122 15:59:43.995486 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qzbv" event={"ID":"46d844f1-5ad4-43f7-a6d4-2793181f982d","Type":"ContainerDied","Data":"b2601a664db7eaa3b33bb5f93ffc21d365df198682c5f5fc30791c30e22e28f8"}
Jan 22 15:59:43 crc kubenswrapper[4825]: I0122 15:59:43.995540 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qzbv" event={"ID":"46d844f1-5ad4-43f7-a6d4-2793181f982d","Type":"ContainerDied","Data":"21f367a63c1bdc939a1f6aaa261d18e1ec555d35a6dad7a3b4c2585c01aa0d04"}
Jan 22 15:59:43 crc kubenswrapper[4825]: I0122 15:59:43.995570 4825 scope.go:117] "RemoveContainer" containerID="b2601a664db7eaa3b33bb5f93ffc21d365df198682c5f5fc30791c30e22e28f8"
Jan 22 15:59:44 crc kubenswrapper[4825]: I0122 15:59:44.026862 4825 scope.go:117] "RemoveContainer" containerID="1023139d58fa952e56081eb8cc4080769337c30c2e1a201217680cb28b1c0a4e"
Jan 22 15:59:44 crc kubenswrapper[4825]: I0122 15:59:44.041583 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9qzbv"]
Jan 22 15:59:44 crc kubenswrapper[4825]: I0122 15:59:44.050492 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9qzbv"]
Jan 22 15:59:44 crc kubenswrapper[4825]: I0122 15:59:44.080725 4825 scope.go:117] "RemoveContainer" containerID="2f45e4bd31ff049b327ff7b0f6d8c0f68cffb7bdfdf6773955fa387fa65bd178"
Jan 22 15:59:44 crc kubenswrapper[4825]: I0122 15:59:44.106703 4825 scope.go:117] "RemoveContainer" containerID="b2601a664db7eaa3b33bb5f93ffc21d365df198682c5f5fc30791c30e22e28f8"
Jan 22 15:59:44 crc kubenswrapper[4825]: E0122 15:59:44.107267 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2601a664db7eaa3b33bb5f93ffc21d365df198682c5f5fc30791c30e22e28f8\": container with ID starting with b2601a664db7eaa3b33bb5f93ffc21d365df198682c5f5fc30791c30e22e28f8 not found: ID does not exist" containerID="b2601a664db7eaa3b33bb5f93ffc21d365df198682c5f5fc30791c30e22e28f8"
Jan 22 15:59:44 crc kubenswrapper[4825]: I0122 15:59:44.107298 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2601a664db7eaa3b33bb5f93ffc21d365df198682c5f5fc30791c30e22e28f8"} err="failed to get container status \"b2601a664db7eaa3b33bb5f93ffc21d365df198682c5f5fc30791c30e22e28f8\": rpc error: code = NotFound desc = could not find container \"b2601a664db7eaa3b33bb5f93ffc21d365df198682c5f5fc30791c30e22e28f8\": container with ID starting with b2601a664db7eaa3b33bb5f93ffc21d365df198682c5f5fc30791c30e22e28f8 not found: ID does not exist"
Jan 22 15:59:44 crc kubenswrapper[4825]: I0122 15:59:44.107319 4825 scope.go:117] "RemoveContainer" containerID="1023139d58fa952e56081eb8cc4080769337c30c2e1a201217680cb28b1c0a4e"
Jan 22 15:59:44 crc kubenswrapper[4825]: E0122 15:59:44.107813 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1023139d58fa952e56081eb8cc4080769337c30c2e1a201217680cb28b1c0a4e\": container with ID starting with 1023139d58fa952e56081eb8cc4080769337c30c2e1a201217680cb28b1c0a4e not found: ID does not exist" containerID="1023139d58fa952e56081eb8cc4080769337c30c2e1a201217680cb28b1c0a4e"
Jan 22 15:59:44 crc kubenswrapper[4825]: I0122 15:59:44.107841 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1023139d58fa952e56081eb8cc4080769337c30c2e1a201217680cb28b1c0a4e"} err="failed to get container status \"1023139d58fa952e56081eb8cc4080769337c30c2e1a201217680cb28b1c0a4e\": rpc error: code = NotFound desc = could not find container \"1023139d58fa952e56081eb8cc4080769337c30c2e1a201217680cb28b1c0a4e\": container with ID starting with 1023139d58fa952e56081eb8cc4080769337c30c2e1a201217680cb28b1c0a4e not found: ID does not exist"
Jan 22 15:59:44 crc kubenswrapper[4825]: I0122 15:59:44.107861 4825 scope.go:117] "RemoveContainer" containerID="2f45e4bd31ff049b327ff7b0f6d8c0f68cffb7bdfdf6773955fa387fa65bd178"
Jan 22 15:59:44 crc kubenswrapper[4825]: E0122 15:59:44.108388 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f45e4bd31ff049b327ff7b0f6d8c0f68cffb7bdfdf6773955fa387fa65bd178\": container with ID starting with 2f45e4bd31ff049b327ff7b0f6d8c0f68cffb7bdfdf6773955fa387fa65bd178 not found: ID does not exist" containerID="2f45e4bd31ff049b327ff7b0f6d8c0f68cffb7bdfdf6773955fa387fa65bd178"
Jan 22 15:59:44 crc kubenswrapper[4825]: I0122 15:59:44.108479 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f45e4bd31ff049b327ff7b0f6d8c0f68cffb7bdfdf6773955fa387fa65bd178"} err="failed to get container status \"2f45e4bd31ff049b327ff7b0f6d8c0f68cffb7bdfdf6773955fa387fa65bd178\": rpc error: code = NotFound desc = could not find container \"2f45e4bd31ff049b327ff7b0f6d8c0f68cffb7bdfdf6773955fa387fa65bd178\": container with ID starting with 2f45e4bd31ff049b327ff7b0f6d8c0f68cffb7bdfdf6773955fa387fa65bd178 not found: ID does not exist"
Jan 22 15:59:44 crc kubenswrapper[4825]: I0122 15:59:44.640178 4825 scope.go:117] "RemoveContainer" containerID="8e5bddf822a2395fb61690169e8fc9ed8285244f723264be49fed4d886e79c50"
Jan 22 15:59:44 crc kubenswrapper[4825]: I0122 15:59:44.675047 4825 scope.go:117] "RemoveContainer" containerID="63224e430271b319874dcc2c67bbb89dada81e109fe23cda6c832e19e6e3704f"
Jan 22 15:59:45 crc kubenswrapper[4825]: I0122 15:59:45.529824 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46d844f1-5ad4-43f7-a6d4-2793181f982d" path="/var/lib/kubelet/pods/46d844f1-5ad4-43f7-a6d4-2793181f982d/volumes"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.164766 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484960-w9d86"]
Jan 22 16:00:00 crc kubenswrapper[4825]: E0122 16:00:00.165825 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d844f1-5ad4-43f7-a6d4-2793181f982d" containerName="extract-utilities"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.165849 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d844f1-5ad4-43f7-a6d4-2793181f982d" containerName="extract-utilities"
Jan 22 16:00:00 crc kubenswrapper[4825]: E0122 16:00:00.165871 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c66a6d9-16f5-4dc5-8287-1cde3fbc3096" containerName="extract-utilities"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.165878 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c66a6d9-16f5-4dc5-8287-1cde3fbc3096" containerName="extract-utilities"
Jan 22 16:00:00 crc kubenswrapper[4825]: E0122 16:00:00.165903 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d844f1-5ad4-43f7-a6d4-2793181f982d" containerName="registry-server"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.165911 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d844f1-5ad4-43f7-a6d4-2793181f982d" containerName="registry-server"
Jan 22 16:00:00 crc kubenswrapper[4825]: E0122 16:00:00.165934 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d844f1-5ad4-43f7-a6d4-2793181f982d" containerName="extract-content"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.165941 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d844f1-5ad4-43f7-a6d4-2793181f982d" containerName="extract-content"
Jan 22 16:00:00 crc kubenswrapper[4825]: E0122 16:00:00.165962 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c66a6d9-16f5-4dc5-8287-1cde3fbc3096" containerName="extract-content"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.165969 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c66a6d9-16f5-4dc5-8287-1cde3fbc3096" containerName="extract-content"
Jan 22 16:00:00 crc kubenswrapper[4825]: E0122 16:00:00.166006 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c66a6d9-16f5-4dc5-8287-1cde3fbc3096" containerName="registry-server"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.166013 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c66a6d9-16f5-4dc5-8287-1cde3fbc3096" containerName="registry-server"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.166274 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c66a6d9-16f5-4dc5-8287-1cde3fbc3096" containerName="registry-server"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.166296 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="46d844f1-5ad4-43f7-a6d4-2793181f982d" containerName="registry-server"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.167361 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484960-w9d86"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.170779 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.170840 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.179596 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484960-w9d86"]
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.299413 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlh2z\" (UniqueName: \"kubernetes.io/projected/d8932b91-b7cf-43d4-a324-1f6bb0271cfd-kube-api-access-mlh2z\") pod \"collect-profiles-29484960-w9d86\" (UID: \"d8932b91-b7cf-43d4-a324-1f6bb0271cfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484960-w9d86"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.299720 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8932b91-b7cf-43d4-a324-1f6bb0271cfd-secret-volume\") pod \"collect-profiles-29484960-w9d86\" (UID: \"d8932b91-b7cf-43d4-a324-1f6bb0271cfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484960-w9d86"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.299812 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8932b91-b7cf-43d4-a324-1f6bb0271cfd-config-volume\") pod \"collect-profiles-29484960-w9d86\" (UID: \"d8932b91-b7cf-43d4-a324-1f6bb0271cfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484960-w9d86"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.402239 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlh2z\" (UniqueName: \"kubernetes.io/projected/d8932b91-b7cf-43d4-a324-1f6bb0271cfd-kube-api-access-mlh2z\") pod \"collect-profiles-29484960-w9d86\" (UID: \"d8932b91-b7cf-43d4-a324-1f6bb0271cfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484960-w9d86"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.402379 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8932b91-b7cf-43d4-a324-1f6bb0271cfd-secret-volume\") pod \"collect-profiles-29484960-w9d86\" (UID: \"d8932b91-b7cf-43d4-a324-1f6bb0271cfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484960-w9d86"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.402410 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8932b91-b7cf-43d4-a324-1f6bb0271cfd-config-volume\") pod \"collect-profiles-29484960-w9d86\" (UID: \"d8932b91-b7cf-43d4-a324-1f6bb0271cfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484960-w9d86"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.403334 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8932b91-b7cf-43d4-a324-1f6bb0271cfd-config-volume\") pod \"collect-profiles-29484960-w9d86\" (UID: \"d8932b91-b7cf-43d4-a324-1f6bb0271cfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484960-w9d86"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.416684 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8932b91-b7cf-43d4-a324-1f6bb0271cfd-secret-volume\") pod \"collect-profiles-29484960-w9d86\" (UID: \"d8932b91-b7cf-43d4-a324-1f6bb0271cfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484960-w9d86"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.425286 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlh2z\" (UniqueName: \"kubernetes.io/projected/d8932b91-b7cf-43d4-a324-1f6bb0271cfd-kube-api-access-mlh2z\") pod \"collect-profiles-29484960-w9d86\" (UID: \"d8932b91-b7cf-43d4-a324-1f6bb0271cfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484960-w9d86"
Jan 22 16:00:00 crc kubenswrapper[4825]: I0122 16:00:00.501845 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484960-w9d86"
Jan 22 16:00:01 crc kubenswrapper[4825]: I0122 16:00:01.256822 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484960-w9d86"]
Jan 22 16:00:01 crc kubenswrapper[4825]: I0122 16:00:01.407871 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484960-w9d86" event={"ID":"d8932b91-b7cf-43d4-a324-1f6bb0271cfd","Type":"ContainerStarted","Data":"fa1102488b5a10088c9011025774a7cac339ee91e090ccf0098d13848fef7c3a"}
Jan 22 16:00:02 crc kubenswrapper[4825]: I0122 16:00:02.420289 4825 generic.go:334] "Generic (PLEG): container finished" podID="d8932b91-b7cf-43d4-a324-1f6bb0271cfd" containerID="b5287116ddab99f2ba7db10d252495a4e7da968f46fc1d7de445e21c6592aa9f" exitCode=0
Jan 22 16:00:02 crc kubenswrapper[4825]: I0122 16:00:02.420445 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484960-w9d86" event={"ID":"d8932b91-b7cf-43d4-a324-1f6bb0271cfd","Type":"ContainerDied","Data":"b5287116ddab99f2ba7db10d252495a4e7da968f46fc1d7de445e21c6592aa9f"}
Jan 22 16:00:03 crc kubenswrapper[4825]: I0122 16:00:03.869346 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484960-w9d86"
Jan 22 16:00:03 crc kubenswrapper[4825]: I0122 16:00:03.972744 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8932b91-b7cf-43d4-a324-1f6bb0271cfd-config-volume\") pod \"d8932b91-b7cf-43d4-a324-1f6bb0271cfd\" (UID: \"d8932b91-b7cf-43d4-a324-1f6bb0271cfd\") "
Jan 22 16:00:03 crc kubenswrapper[4825]: I0122 16:00:03.972920 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8932b91-b7cf-43d4-a324-1f6bb0271cfd-secret-volume\") pod \"d8932b91-b7cf-43d4-a324-1f6bb0271cfd\" (UID: \"d8932b91-b7cf-43d4-a324-1f6bb0271cfd\") "
Jan 22 16:00:03 crc kubenswrapper[4825]: I0122 16:00:03.973072 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlh2z\" (UniqueName: \"kubernetes.io/projected/d8932b91-b7cf-43d4-a324-1f6bb0271cfd-kube-api-access-mlh2z\") pod \"d8932b91-b7cf-43d4-a324-1f6bb0271cfd\" (UID: \"d8932b91-b7cf-43d4-a324-1f6bb0271cfd\") "
Jan 22 16:00:03 crc kubenswrapper[4825]: I0122 16:00:03.974120 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8932b91-b7cf-43d4-a324-1f6bb0271cfd-config-volume" (OuterVolumeSpecName: "config-volume") pod "d8932b91-b7cf-43d4-a324-1f6bb0271cfd" (UID: "d8932b91-b7cf-43d4-a324-1f6bb0271cfd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 16:00:03 crc kubenswrapper[4825]: I0122 16:00:03.974783 4825 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8932b91-b7cf-43d4-a324-1f6bb0271cfd-config-volume\") on node \"crc\" DevicePath \"\""
Jan 22 16:00:03 crc kubenswrapper[4825]: I0122 16:00:03.980164 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8932b91-b7cf-43d4-a324-1f6bb0271cfd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d8932b91-b7cf-43d4-a324-1f6bb0271cfd" (UID: "d8932b91-b7cf-43d4-a324-1f6bb0271cfd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:00:03 crc kubenswrapper[4825]: I0122 16:00:03.980229 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8932b91-b7cf-43d4-a324-1f6bb0271cfd-kube-api-access-mlh2z" (OuterVolumeSpecName: "kube-api-access-mlh2z") pod "d8932b91-b7cf-43d4-a324-1f6bb0271cfd" (UID: "d8932b91-b7cf-43d4-a324-1f6bb0271cfd"). InnerVolumeSpecName "kube-api-access-mlh2z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:00:04 crc kubenswrapper[4825]: I0122 16:00:04.077274 4825 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8932b91-b7cf-43d4-a324-1f6bb0271cfd-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 22 16:00:04 crc kubenswrapper[4825]: I0122 16:00:04.077329 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlh2z\" (UniqueName: \"kubernetes.io/projected/d8932b91-b7cf-43d4-a324-1f6bb0271cfd-kube-api-access-mlh2z\") on node \"crc\" DevicePath \"\""
Jan 22 16:00:04 crc kubenswrapper[4825]: I0122 16:00:04.440381 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484960-w9d86" event={"ID":"d8932b91-b7cf-43d4-a324-1f6bb0271cfd","Type":"ContainerDied","Data":"fa1102488b5a10088c9011025774a7cac339ee91e090ccf0098d13848fef7c3a"}
Jan 22 16:00:04 crc kubenswrapper[4825]: I0122 16:00:04.440424 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa1102488b5a10088c9011025774a7cac339ee91e090ccf0098d13848fef7c3a"
Jan 22 16:00:04 crc kubenswrapper[4825]: I0122 16:00:04.440441 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484960-w9d86"
Jan 22 16:00:04 crc kubenswrapper[4825]: I0122 16:00:04.960839 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz"]
Jan 22 16:00:04 crc kubenswrapper[4825]: I0122 16:00:04.969328 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484915-hkqwz"]
Jan 22 16:00:05 crc kubenswrapper[4825]: I0122 16:00:05.453689 4825 generic.go:334] "Generic (PLEG): container finished" podID="90cd4aa4-003a-423a-a15b-1f0321375a34" containerID="fef146c579613a2c38a9b532329d9a7807ca1b95e99f1983d6d515b7d0c4d640" exitCode=0
Jan 22 16:00:05 crc kubenswrapper[4825]: I0122 16:00:05.453744 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52" event={"ID":"90cd4aa4-003a-423a-a15b-1f0321375a34","Type":"ContainerDied","Data":"fef146c579613a2c38a9b532329d9a7807ca1b95e99f1983d6d515b7d0c4d640"}
Jan 22 16:00:05 crc kubenswrapper[4825]: I0122 16:00:05.642645 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 16:00:05 crc kubenswrapper[4825]: I0122 16:00:05.643141 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 16:00:05 crc kubenswrapper[4825]: I0122 16:00:05.643266 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt"
Jan 22 16:00:05 crc kubenswrapper[4825]: I0122 16:00:05.644268 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5bcc1d277e3ad443248de981b2ff45bbf7029c5fe07cb018b3784c1adec9e60c"} pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 16:00:05 crc kubenswrapper[4825]: I0122 16:00:05.644345 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" containerID="cri-o://5bcc1d277e3ad443248de981b2ff45bbf7029c5fe07cb018b3784c1adec9e60c" gracePeriod=600
Jan 22 16:00:05 crc kubenswrapper[4825]: I0122 16:00:05.673296 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caae48a6-c8ee-4c56-91cc-fe8f4b21e313" path="/var/lib/kubelet/pods/caae48a6-c8ee-4c56-91cc-fe8f4b21e313/volumes"
Jan 22 16:00:05 crc kubenswrapper[4825]: E0122 16:00:05.936954 4825 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d6015ae_d193_4854_9861_dc4384510fdb.slice/crio-5bcc1d277e3ad443248de981b2ff45bbf7029c5fe07cb018b3784c1adec9e60c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d6015ae_d193_4854_9861_dc4384510fdb.slice/crio-conmon-5bcc1d277e3ad443248de981b2ff45bbf7029c5fe07cb018b3784c1adec9e60c.scope\": RecentStats: unable to find data in memory cache]"
Jan 22 16:00:06 crc kubenswrapper[4825]: I0122 16:00:06.466045 4825 generic.go:334] "Generic (PLEG): container finished" podID="1d6015ae-d193-4854-9861-dc4384510fdb" containerID="5bcc1d277e3ad443248de981b2ff45bbf7029c5fe07cb018b3784c1adec9e60c" exitCode=0
Jan 22 16:00:06 crc kubenswrapper[4825]: I0122 16:00:06.466131 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerDied","Data":"5bcc1d277e3ad443248de981b2ff45bbf7029c5fe07cb018b3784c1adec9e60c"}
Jan 22 16:00:06 crc kubenswrapper[4825]: I0122 16:00:06.466342 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerStarted","Data":"0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"}
Jan 22 16:00:06 crc kubenswrapper[4825]: I0122 16:00:06.466360 4825 scope.go:117] "RemoveContainer" containerID="88ede37ba45b0f261e0327961a0f8c6e3fb9b840a9d3fd11ddf5bc730f1fbd2d"
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.015521 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52"
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.201366 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90cd4aa4-003a-423a-a15b-1f0321375a34-inventory\") pod \"90cd4aa4-003a-423a-a15b-1f0321375a34\" (UID: \"90cd4aa4-003a-423a-a15b-1f0321375a34\") "
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.201525 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f4dw\" (UniqueName: \"kubernetes.io/projected/90cd4aa4-003a-423a-a15b-1f0321375a34-kube-api-access-9f4dw\") pod \"90cd4aa4-003a-423a-a15b-1f0321375a34\" (UID: \"90cd4aa4-003a-423a-a15b-1f0321375a34\") "
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.201651 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90cd4aa4-003a-423a-a15b-1f0321375a34-ssh-key-openstack-edpm-ipam\") pod \"90cd4aa4-003a-423a-a15b-1f0321375a34\" (UID: \"90cd4aa4-003a-423a-a15b-1f0321375a34\") "
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.208102 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90cd4aa4-003a-423a-a15b-1f0321375a34-kube-api-access-9f4dw" (OuterVolumeSpecName: "kube-api-access-9f4dw") pod "90cd4aa4-003a-423a-a15b-1f0321375a34" (UID: "90cd4aa4-003a-423a-a15b-1f0321375a34"). InnerVolumeSpecName "kube-api-access-9f4dw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.240642 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90cd4aa4-003a-423a-a15b-1f0321375a34-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "90cd4aa4-003a-423a-a15b-1f0321375a34" (UID: "90cd4aa4-003a-423a-a15b-1f0321375a34"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.241526 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90cd4aa4-003a-423a-a15b-1f0321375a34-inventory" (OuterVolumeSpecName: "inventory") pod "90cd4aa4-003a-423a-a15b-1f0321375a34" (UID: "90cd4aa4-003a-423a-a15b-1f0321375a34"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.304751 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f4dw\" (UniqueName: \"kubernetes.io/projected/90cd4aa4-003a-423a-a15b-1f0321375a34-kube-api-access-9f4dw\") on node \"crc\" DevicePath \"\""
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.304798 4825 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90cd4aa4-003a-423a-a15b-1f0321375a34-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.304814 4825 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90cd4aa4-003a-423a-a15b-1f0321375a34-inventory\") on node \"crc\" DevicePath \"\""
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.495435 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52" event={"ID":"90cd4aa4-003a-423a-a15b-1f0321375a34","Type":"ContainerDied","Data":"316c65c83b4dfe3cf08c96dd519e7d0bdbc3d746fb2b273c67025e18d5c0980a"}
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.495489 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="316c65c83b4dfe3cf08c96dd519e7d0bdbc3d746fb2b273c67025e18d5c0980a"
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.495580 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m2b52"
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.691913 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-phjk9"]
Jan 22 16:00:07 crc kubenswrapper[4825]: E0122 16:00:07.692370 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90cd4aa4-003a-423a-a15b-1f0321375a34" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.692388 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="90cd4aa4-003a-423a-a15b-1f0321375a34" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 22 16:00:07 crc kubenswrapper[4825]: E0122 16:00:07.692405 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8932b91-b7cf-43d4-a324-1f6bb0271cfd" containerName="collect-profiles"
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.692412 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8932b91-b7cf-43d4-a324-1f6bb0271cfd" containerName="collect-profiles"
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.692604 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8932b91-b7cf-43d4-a324-1f6bb0271cfd" containerName="collect-profiles"
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.692631 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="90cd4aa4-003a-423a-a15b-1f0321375a34" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.693427 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-phjk9"
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.696042 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql4gv"
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.696632 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.696875 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.697160 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.722767 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-phjk9"]
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.826334 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65kpg\" (UniqueName: \"kubernetes.io/projected/7aa5c4d3-6bba-4019-a805-182fc8fa4efa-kube-api-access-65kpg\") pod \"ssh-known-hosts-edpm-deployment-phjk9\" (UID: \"7aa5c4d3-6bba-4019-a805-182fc8fa4efa\") " pod="openstack/ssh-known-hosts-edpm-deployment-phjk9"
Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.826447 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7aa5c4d3-6bba-4019-a805-182fc8fa4efa-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-phjk9\" (UID: \"7aa5c4d3-6bba-4019-a805-182fc8fa4efa\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-phjk9" Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.826484 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7aa5c4d3-6bba-4019-a805-182fc8fa4efa-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-phjk9\" (UID: \"7aa5c4d3-6bba-4019-a805-182fc8fa4efa\") " pod="openstack/ssh-known-hosts-edpm-deployment-phjk9" Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.929056 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7aa5c4d3-6bba-4019-a805-182fc8fa4efa-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-phjk9\" (UID: \"7aa5c4d3-6bba-4019-a805-182fc8fa4efa\") " pod="openstack/ssh-known-hosts-edpm-deployment-phjk9" Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.929255 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7aa5c4d3-6bba-4019-a805-182fc8fa4efa-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-phjk9\" (UID: \"7aa5c4d3-6bba-4019-a805-182fc8fa4efa\") " pod="openstack/ssh-known-hosts-edpm-deployment-phjk9" Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.929692 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65kpg\" (UniqueName: \"kubernetes.io/projected/7aa5c4d3-6bba-4019-a805-182fc8fa4efa-kube-api-access-65kpg\") pod \"ssh-known-hosts-edpm-deployment-phjk9\" (UID: \"7aa5c4d3-6bba-4019-a805-182fc8fa4efa\") " pod="openstack/ssh-known-hosts-edpm-deployment-phjk9" Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.933553 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/7aa5c4d3-6bba-4019-a805-182fc8fa4efa-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-phjk9\" (UID: \"7aa5c4d3-6bba-4019-a805-182fc8fa4efa\") " pod="openstack/ssh-known-hosts-edpm-deployment-phjk9" Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.942209 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7aa5c4d3-6bba-4019-a805-182fc8fa4efa-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-phjk9\" (UID: \"7aa5c4d3-6bba-4019-a805-182fc8fa4efa\") " pod="openstack/ssh-known-hosts-edpm-deployment-phjk9" Jan 22 16:00:07 crc kubenswrapper[4825]: I0122 16:00:07.945218 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65kpg\" (UniqueName: \"kubernetes.io/projected/7aa5c4d3-6bba-4019-a805-182fc8fa4efa-kube-api-access-65kpg\") pod \"ssh-known-hosts-edpm-deployment-phjk9\" (UID: \"7aa5c4d3-6bba-4019-a805-182fc8fa4efa\") " pod="openstack/ssh-known-hosts-edpm-deployment-phjk9" Jan 22 16:00:08 crc kubenswrapper[4825]: I0122 16:00:08.010000 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-phjk9" Jan 22 16:00:08 crc kubenswrapper[4825]: I0122 16:00:08.608607 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-phjk9"] Jan 22 16:00:08 crc kubenswrapper[4825]: W0122 16:00:08.620218 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7aa5c4d3_6bba_4019_a805_182fc8fa4efa.slice/crio-b161f68c22b407e10249b10416dbaba89124949bad8e6e15ca807c38fe93a5ff WatchSource:0}: Error finding container b161f68c22b407e10249b10416dbaba89124949bad8e6e15ca807c38fe93a5ff: Status 404 returned error can't find the container with id b161f68c22b407e10249b10416dbaba89124949bad8e6e15ca807c38fe93a5ff Jan 22 16:00:08 crc kubenswrapper[4825]: I0122 16:00:08.624964 4825 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 16:00:09 crc kubenswrapper[4825]: I0122 16:00:09.532165 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-phjk9" event={"ID":"7aa5c4d3-6bba-4019-a805-182fc8fa4efa","Type":"ContainerStarted","Data":"27562b5113ddd67136c6402b5976846c87191903e54f76d15d8cc4e452693488"} Jan 22 16:00:09 crc kubenswrapper[4825]: I0122 16:00:09.532366 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-phjk9" event={"ID":"7aa5c4d3-6bba-4019-a805-182fc8fa4efa","Type":"ContainerStarted","Data":"b161f68c22b407e10249b10416dbaba89124949bad8e6e15ca807c38fe93a5ff"} Jan 22 16:00:09 crc kubenswrapper[4825]: I0122 16:00:09.550654 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-phjk9" podStartSLOduration=1.9088900629999999 podStartE2EDuration="2.550630401s" podCreationTimestamp="2026-01-22 16:00:07 +0000 UTC" firstStartedPulling="2026-01-22 16:00:08.624669901 +0000 UTC 
m=+2155.386196811" lastFinishedPulling="2026-01-22 16:00:09.266410229 +0000 UTC m=+2156.027937149" observedRunningTime="2026-01-22 16:00:09.544444066 +0000 UTC m=+2156.305970986" watchObservedRunningTime="2026-01-22 16:00:09.550630401 +0000 UTC m=+2156.312157321" Jan 22 16:00:16 crc kubenswrapper[4825]: I0122 16:00:16.745076 4825 generic.go:334] "Generic (PLEG): container finished" podID="7aa5c4d3-6bba-4019-a805-182fc8fa4efa" containerID="27562b5113ddd67136c6402b5976846c87191903e54f76d15d8cc4e452693488" exitCode=0 Jan 22 16:00:16 crc kubenswrapper[4825]: I0122 16:00:16.745207 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-phjk9" event={"ID":"7aa5c4d3-6bba-4019-a805-182fc8fa4efa","Type":"ContainerDied","Data":"27562b5113ddd67136c6402b5976846c87191903e54f76d15d8cc4e452693488"} Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.469609 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-phjk9" Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.559805 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7aa5c4d3-6bba-4019-a805-182fc8fa4efa-ssh-key-openstack-edpm-ipam\") pod \"7aa5c4d3-6bba-4019-a805-182fc8fa4efa\" (UID: \"7aa5c4d3-6bba-4019-a805-182fc8fa4efa\") " Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.560032 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7aa5c4d3-6bba-4019-a805-182fc8fa4efa-inventory-0\") pod \"7aa5c4d3-6bba-4019-a805-182fc8fa4efa\" (UID: \"7aa5c4d3-6bba-4019-a805-182fc8fa4efa\") " Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.560234 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65kpg\" (UniqueName: 
\"kubernetes.io/projected/7aa5c4d3-6bba-4019-a805-182fc8fa4efa-kube-api-access-65kpg\") pod \"7aa5c4d3-6bba-4019-a805-182fc8fa4efa\" (UID: \"7aa5c4d3-6bba-4019-a805-182fc8fa4efa\") " Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.580122 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aa5c4d3-6bba-4019-a805-182fc8fa4efa-kube-api-access-65kpg" (OuterVolumeSpecName: "kube-api-access-65kpg") pod "7aa5c4d3-6bba-4019-a805-182fc8fa4efa" (UID: "7aa5c4d3-6bba-4019-a805-182fc8fa4efa"). InnerVolumeSpecName "kube-api-access-65kpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.627554 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa5c4d3-6bba-4019-a805-182fc8fa4efa-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7aa5c4d3-6bba-4019-a805-182fc8fa4efa" (UID: "7aa5c4d3-6bba-4019-a805-182fc8fa4efa"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.649488 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa5c4d3-6bba-4019-a805-182fc8fa4efa-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "7aa5c4d3-6bba-4019-a805-182fc8fa4efa" (UID: "7aa5c4d3-6bba-4019-a805-182fc8fa4efa"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.668204 4825 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7aa5c4d3-6bba-4019-a805-182fc8fa4efa-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.668237 4825 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7aa5c4d3-6bba-4019-a805-182fc8fa4efa-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.668249 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65kpg\" (UniqueName: \"kubernetes.io/projected/7aa5c4d3-6bba-4019-a805-182fc8fa4efa-kube-api-access-65kpg\") on node \"crc\" DevicePath \"\"" Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.773570 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-phjk9" event={"ID":"7aa5c4d3-6bba-4019-a805-182fc8fa4efa","Type":"ContainerDied","Data":"b161f68c22b407e10249b10416dbaba89124949bad8e6e15ca807c38fe93a5ff"} Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.773629 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-phjk9" Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.773668 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b161f68c22b407e10249b10416dbaba89124949bad8e6e15ca807c38fe93a5ff" Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.866794 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl"] Jan 22 16:00:18 crc kubenswrapper[4825]: E0122 16:00:18.867232 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa5c4d3-6bba-4019-a805-182fc8fa4efa" containerName="ssh-known-hosts-edpm-deployment" Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.867249 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa5c4d3-6bba-4019-a805-182fc8fa4efa" containerName="ssh-known-hosts-edpm-deployment" Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.867501 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aa5c4d3-6bba-4019-a805-182fc8fa4efa" containerName="ssh-known-hosts-edpm-deployment" Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.868342 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl" Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.870476 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.870614 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql4gv" Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.871837 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.872011 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.878808 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl"] Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.974694 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36cb581a-e6c1-479e-ad47-efcba7182aef-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v49kl\" (UID: \"36cb581a-e6c1-479e-ad47-efcba7182aef\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl" Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.974752 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7v7r\" (UniqueName: \"kubernetes.io/projected/36cb581a-e6c1-479e-ad47-efcba7182aef-kube-api-access-s7v7r\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v49kl\" (UID: \"36cb581a-e6c1-479e-ad47-efcba7182aef\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl" Jan 22 16:00:18 crc kubenswrapper[4825]: I0122 16:00:18.974786 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36cb581a-e6c1-479e-ad47-efcba7182aef-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v49kl\" (UID: \"36cb581a-e6c1-479e-ad47-efcba7182aef\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl" Jan 22 16:00:19 crc kubenswrapper[4825]: I0122 16:00:19.075942 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36cb581a-e6c1-479e-ad47-efcba7182aef-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v49kl\" (UID: \"36cb581a-e6c1-479e-ad47-efcba7182aef\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl" Jan 22 16:00:19 crc kubenswrapper[4825]: I0122 16:00:19.076275 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7v7r\" (UniqueName: \"kubernetes.io/projected/36cb581a-e6c1-479e-ad47-efcba7182aef-kube-api-access-s7v7r\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v49kl\" (UID: \"36cb581a-e6c1-479e-ad47-efcba7182aef\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl" Jan 22 16:00:19 crc kubenswrapper[4825]: I0122 16:00:19.076328 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36cb581a-e6c1-479e-ad47-efcba7182aef-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v49kl\" (UID: \"36cb581a-e6c1-479e-ad47-efcba7182aef\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl" Jan 22 16:00:19 crc kubenswrapper[4825]: I0122 16:00:19.079784 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36cb581a-e6c1-479e-ad47-efcba7182aef-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v49kl\" (UID: 
\"36cb581a-e6c1-479e-ad47-efcba7182aef\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl" Jan 22 16:00:19 crc kubenswrapper[4825]: I0122 16:00:19.079790 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36cb581a-e6c1-479e-ad47-efcba7182aef-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v49kl\" (UID: \"36cb581a-e6c1-479e-ad47-efcba7182aef\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl" Jan 22 16:00:19 crc kubenswrapper[4825]: I0122 16:00:19.094842 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7v7r\" (UniqueName: \"kubernetes.io/projected/36cb581a-e6c1-479e-ad47-efcba7182aef-kube-api-access-s7v7r\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v49kl\" (UID: \"36cb581a-e6c1-479e-ad47-efcba7182aef\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl" Jan 22 16:00:19 crc kubenswrapper[4825]: I0122 16:00:19.201924 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl" Jan 22 16:00:19 crc kubenswrapper[4825]: W0122 16:00:19.801233 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36cb581a_e6c1_479e_ad47_efcba7182aef.slice/crio-61e8e0d218572c0432717df585ea2b2c73b3ba93abf9813bd11977e538f76652 WatchSource:0}: Error finding container 61e8e0d218572c0432717df585ea2b2c73b3ba93abf9813bd11977e538f76652: Status 404 returned error can't find the container with id 61e8e0d218572c0432717df585ea2b2c73b3ba93abf9813bd11977e538f76652 Jan 22 16:00:19 crc kubenswrapper[4825]: I0122 16:00:19.802358 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl"] Jan 22 16:00:20 crc kubenswrapper[4825]: I0122 16:00:20.911310 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl" event={"ID":"36cb581a-e6c1-479e-ad47-efcba7182aef","Type":"ContainerStarted","Data":"41b981a06697fa1fbc4360dd95554eed75c2f74d43956bea14b057a741a82aba"} Jan 22 16:00:20 crc kubenswrapper[4825]: I0122 16:00:20.911650 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl" event={"ID":"36cb581a-e6c1-479e-ad47-efcba7182aef","Type":"ContainerStarted","Data":"61e8e0d218572c0432717df585ea2b2c73b3ba93abf9813bd11977e538f76652"} Jan 22 16:00:20 crc kubenswrapper[4825]: I0122 16:00:20.948605 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl" podStartSLOduration=2.5221255 podStartE2EDuration="2.948582083s" podCreationTimestamp="2026-01-22 16:00:18 +0000 UTC" firstStartedPulling="2026-01-22 16:00:19.809341134 +0000 UTC m=+2166.570868054" lastFinishedPulling="2026-01-22 16:00:20.235797727 +0000 UTC m=+2166.997324637" observedRunningTime="2026-01-22 
16:00:20.942048808 +0000 UTC m=+2167.703575738" watchObservedRunningTime="2026-01-22 16:00:20.948582083 +0000 UTC m=+2167.710109003" Jan 22 16:00:29 crc kubenswrapper[4825]: I0122 16:00:29.002363 4825 generic.go:334] "Generic (PLEG): container finished" podID="36cb581a-e6c1-479e-ad47-efcba7182aef" containerID="41b981a06697fa1fbc4360dd95554eed75c2f74d43956bea14b057a741a82aba" exitCode=0 Jan 22 16:00:29 crc kubenswrapper[4825]: I0122 16:00:29.002461 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl" event={"ID":"36cb581a-e6c1-479e-ad47-efcba7182aef","Type":"ContainerDied","Data":"41b981a06697fa1fbc4360dd95554eed75c2f74d43956bea14b057a741a82aba"} Jan 22 16:00:30 crc kubenswrapper[4825]: I0122 16:00:30.549516 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl" Jan 22 16:00:30 crc kubenswrapper[4825]: I0122 16:00:30.661237 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36cb581a-e6c1-479e-ad47-efcba7182aef-ssh-key-openstack-edpm-ipam\") pod \"36cb581a-e6c1-479e-ad47-efcba7182aef\" (UID: \"36cb581a-e6c1-479e-ad47-efcba7182aef\") " Jan 22 16:00:30 crc kubenswrapper[4825]: I0122 16:00:30.661541 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36cb581a-e6c1-479e-ad47-efcba7182aef-inventory\") pod \"36cb581a-e6c1-479e-ad47-efcba7182aef\" (UID: \"36cb581a-e6c1-479e-ad47-efcba7182aef\") " Jan 22 16:00:30 crc kubenswrapper[4825]: I0122 16:00:30.661610 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7v7r\" (UniqueName: \"kubernetes.io/projected/36cb581a-e6c1-479e-ad47-efcba7182aef-kube-api-access-s7v7r\") pod \"36cb581a-e6c1-479e-ad47-efcba7182aef\" (UID: 
\"36cb581a-e6c1-479e-ad47-efcba7182aef\") " Jan 22 16:00:30 crc kubenswrapper[4825]: I0122 16:00:30.685149 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36cb581a-e6c1-479e-ad47-efcba7182aef-kube-api-access-s7v7r" (OuterVolumeSpecName: "kube-api-access-s7v7r") pod "36cb581a-e6c1-479e-ad47-efcba7182aef" (UID: "36cb581a-e6c1-479e-ad47-efcba7182aef"). InnerVolumeSpecName "kube-api-access-s7v7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:00:30 crc kubenswrapper[4825]: I0122 16:00:30.700930 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36cb581a-e6c1-479e-ad47-efcba7182aef-inventory" (OuterVolumeSpecName: "inventory") pod "36cb581a-e6c1-479e-ad47-efcba7182aef" (UID: "36cb581a-e6c1-479e-ad47-efcba7182aef"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:00:30 crc kubenswrapper[4825]: I0122 16:00:30.706502 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36cb581a-e6c1-479e-ad47-efcba7182aef-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "36cb581a-e6c1-479e-ad47-efcba7182aef" (UID: "36cb581a-e6c1-479e-ad47-efcba7182aef"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:00:30 crc kubenswrapper[4825]: I0122 16:00:30.763853 4825 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36cb581a-e6c1-479e-ad47-efcba7182aef-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 16:00:30 crc kubenswrapper[4825]: I0122 16:00:30.763892 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7v7r\" (UniqueName: \"kubernetes.io/projected/36cb581a-e6c1-479e-ad47-efcba7182aef-kube-api-access-s7v7r\") on node \"crc\" DevicePath \"\"" Jan 22 16:00:30 crc kubenswrapper[4825]: I0122 16:00:30.763903 4825 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36cb581a-e6c1-479e-ad47-efcba7182aef-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.038366 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl" event={"ID":"36cb581a-e6c1-479e-ad47-efcba7182aef","Type":"ContainerDied","Data":"61e8e0d218572c0432717df585ea2b2c73b3ba93abf9813bd11977e538f76652"} Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.038423 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61e8e0d218572c0432717df585ea2b2c73b3ba93abf9813bd11977e538f76652" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.038461 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v49kl" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.158258 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2"] Jan 22 16:00:31 crc kubenswrapper[4825]: E0122 16:00:31.158678 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36cb581a-e6c1-479e-ad47-efcba7182aef" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.158696 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="36cb581a-e6c1-479e-ad47-efcba7182aef" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.158945 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="36cb581a-e6c1-479e-ad47-efcba7182aef" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.159675 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.162098 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.162323 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql4gv" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.162373 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.171213 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2"] Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.172759 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.274690 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2\" (UID: \"deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.274906 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqdzm\" (UniqueName: \"kubernetes.io/projected/deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1-kube-api-access-zqdzm\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2\" (UID: \"deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 
16:00:31.275613 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2\" (UID: \"deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.378393 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqdzm\" (UniqueName: \"kubernetes.io/projected/deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1-kube-api-access-zqdzm\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2\" (UID: \"deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.378635 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2\" (UID: \"deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.378798 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2\" (UID: \"deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.384112 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2\" (UID: \"deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.389613 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2\" (UID: \"deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.403435 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqdzm\" (UniqueName: \"kubernetes.io/projected/deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1-kube-api-access-zqdzm\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2\" (UID: \"deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2" Jan 22 16:00:31 crc kubenswrapper[4825]: I0122 16:00:31.494097 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2" Jan 22 16:00:32 crc kubenswrapper[4825]: I0122 16:00:32.033833 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2"] Jan 22 16:00:32 crc kubenswrapper[4825]: I0122 16:00:32.052478 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2" event={"ID":"deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1","Type":"ContainerStarted","Data":"41ed614f51209aec6b1050b234e32328ff06dbb7d92387cd494e127d14669773"} Jan 22 16:00:33 crc kubenswrapper[4825]: I0122 16:00:33.063740 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2" event={"ID":"deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1","Type":"ContainerStarted","Data":"a196106d7661d3ad1ecbdd5a56ffda41322c966010a09ce2e084665edfe08728"} Jan 22 16:00:33 crc kubenswrapper[4825]: I0122 16:00:33.085114 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2" podStartSLOduration=1.5007825160000001 podStartE2EDuration="2.085086083s" podCreationTimestamp="2026-01-22 16:00:31 +0000 UTC" firstStartedPulling="2026-01-22 16:00:32.044616919 +0000 UTC m=+2178.806143829" lastFinishedPulling="2026-01-22 16:00:32.628920446 +0000 UTC m=+2179.390447396" observedRunningTime="2026-01-22 16:00:33.080208085 +0000 UTC m=+2179.841735035" watchObservedRunningTime="2026-01-22 16:00:33.085086083 +0000 UTC m=+2179.846613013" Jan 22 16:00:42 crc kubenswrapper[4825]: I0122 16:00:42.183165 4825 generic.go:334] "Generic (PLEG): container finished" podID="deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1" containerID="a196106d7661d3ad1ecbdd5a56ffda41322c966010a09ce2e084665edfe08728" exitCode=0 Jan 22 16:00:42 crc kubenswrapper[4825]: I0122 16:00:42.183243 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2" event={"ID":"deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1","Type":"ContainerDied","Data":"a196106d7661d3ad1ecbdd5a56ffda41322c966010a09ce2e084665edfe08728"} Jan 22 16:00:43 crc kubenswrapper[4825]: I0122 16:00:43.750816 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2" Jan 22 16:00:43 crc kubenswrapper[4825]: I0122 16:00:43.879872 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqdzm\" (UniqueName: \"kubernetes.io/projected/deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1-kube-api-access-zqdzm\") pod \"deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1\" (UID: \"deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1\") " Jan 22 16:00:43 crc kubenswrapper[4825]: I0122 16:00:43.880095 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1-inventory\") pod \"deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1\" (UID: \"deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1\") " Jan 22 16:00:43 crc kubenswrapper[4825]: I0122 16:00:43.880163 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1-ssh-key-openstack-edpm-ipam\") pod \"deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1\" (UID: \"deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1\") " Jan 22 16:00:43 crc kubenswrapper[4825]: I0122 16:00:43.889228 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1-kube-api-access-zqdzm" (OuterVolumeSpecName: "kube-api-access-zqdzm") pod "deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1" (UID: "deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1"). InnerVolumeSpecName "kube-api-access-zqdzm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:00:43 crc kubenswrapper[4825]: I0122 16:00:43.921661 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1" (UID: "deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:00:43 crc kubenswrapper[4825]: I0122 16:00:43.928717 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1-inventory" (OuterVolumeSpecName: "inventory") pod "deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1" (UID: "deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:00:43 crc kubenswrapper[4825]: I0122 16:00:43.984371 4825 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 16:00:43 crc kubenswrapper[4825]: I0122 16:00:43.984402 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqdzm\" (UniqueName: \"kubernetes.io/projected/deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1-kube-api-access-zqdzm\") on node \"crc\" DevicePath \"\"" Jan 22 16:00:43 crc kubenswrapper[4825]: I0122 16:00:43.984414 4825 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.210020 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2" 
event={"ID":"deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1","Type":"ContainerDied","Data":"41ed614f51209aec6b1050b234e32328ff06dbb7d92387cd494e127d14669773"} Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.210082 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41ed614f51209aec6b1050b234e32328ff06dbb7d92387cd494e127d14669773" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.210092 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.366790 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm"] Jan 22 16:00:44 crc kubenswrapper[4825]: E0122 16:00:44.367351 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.367979 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.368714 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.369716 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.377079 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.377096 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.378894 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.379279 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.379425 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.379590 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.379763 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.379906 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql4gv" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.395071 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: 
\"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.395135 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.395195 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.395296 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.395352 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vwt8\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-kube-api-access-7vwt8\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: 
\"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.395386 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.395427 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.395476 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.395548 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.395595 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.395845 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.396027 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.396094 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: 
\"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.396147 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.406201 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm"] Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.498628 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.498690 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.498715 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.498739 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.498776 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.498802 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.498834 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-libvirt-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.498874 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.498907 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vwt8\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-kube-api-access-7vwt8\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.498967 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.499007 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.499047 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.499101 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.499126 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.503924 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc 
kubenswrapper[4825]: I0122 16:00:44.503924 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.504835 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.505444 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.505588 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.506080 4825 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.506597 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.506890 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.507507 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.507873 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm"
Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.511841 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm"
Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.512486 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm"
Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.515239 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm"
Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.530730 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vwt8\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-kube-api-access-7vwt8\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52jcm\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm"
Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.692803 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm"
Jan 22 16:00:44 crc kubenswrapper[4825]: I0122 16:00:44.817940 4825 scope.go:117] "RemoveContainer" containerID="0b3cf3720325cc7d91844eb51ef35e896132c208ab98b5ec8eb68cf404526e03"
Jan 22 16:00:45 crc kubenswrapper[4825]: I0122 16:00:45.247096 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm"]
Jan 22 16:00:46 crc kubenswrapper[4825]: I0122 16:00:46.238697 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" event={"ID":"7fee2632-6167-4d03-adb1-b103201abb59","Type":"ContainerStarted","Data":"5101501993c1da6764e0e07de04a8457425e156023ea72a3167b2e456599f06f"}
Jan 22 16:00:46 crc kubenswrapper[4825]: I0122 16:00:46.238753 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" event={"ID":"7fee2632-6167-4d03-adb1-b103201abb59","Type":"ContainerStarted","Data":"1557cb432fc768173571472c88933983ca98bd5495f2edf7abf5a42c832a2065"}
Jan 22 16:00:46 crc kubenswrapper[4825]: I0122 16:00:46.266295 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" podStartSLOduration=1.808940255 podStartE2EDuration="2.266266816s" podCreationTimestamp="2026-01-22 16:00:44 +0000 UTC" firstStartedPulling="2026-01-22 16:00:45.25391965 +0000 UTC m=+2192.015446570" lastFinishedPulling="2026-01-22 16:00:45.711246221 +0000 UTC m=+2192.472773131" observedRunningTime="2026-01-22 16:00:46.262563441 +0000 UTC m=+2193.024090351" watchObservedRunningTime="2026-01-22 16:00:46.266266816 +0000 UTC m=+2193.027793736"
Jan 22 16:01:00 crc kubenswrapper[4825]: I0122 16:01:00.162156 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29484961-4jx79"]
Jan 22 16:01:00 crc kubenswrapper[4825]: I0122 16:01:00.164957 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29484961-4jx79"
Jan 22 16:01:00 crc kubenswrapper[4825]: I0122 16:01:00.203083 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29484961-4jx79"]
Jan 22 16:01:00 crc kubenswrapper[4825]: I0122 16:01:00.260029 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-combined-ca-bundle\") pod \"keystone-cron-29484961-4jx79\" (UID: \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\") " pod="openstack/keystone-cron-29484961-4jx79"
Jan 22 16:01:00 crc kubenswrapper[4825]: I0122 16:01:00.260132 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkghs\" (UniqueName: \"kubernetes.io/projected/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-kube-api-access-qkghs\") pod \"keystone-cron-29484961-4jx79\" (UID: \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\") " pod="openstack/keystone-cron-29484961-4jx79"
Jan 22 16:01:00 crc kubenswrapper[4825]: I0122 16:01:00.260569 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-config-data\") pod \"keystone-cron-29484961-4jx79\" (UID: \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\") " pod="openstack/keystone-cron-29484961-4jx79"
Jan 22 16:01:00 crc kubenswrapper[4825]: I0122 16:01:00.260654 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-fernet-keys\") pod \"keystone-cron-29484961-4jx79\" (UID: \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\") " pod="openstack/keystone-cron-29484961-4jx79"
Jan 22 16:01:00 crc kubenswrapper[4825]: I0122 16:01:00.363034 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-combined-ca-bundle\") pod \"keystone-cron-29484961-4jx79\" (UID: \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\") " pod="openstack/keystone-cron-29484961-4jx79"
Jan 22 16:01:00 crc kubenswrapper[4825]: I0122 16:01:00.363147 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkghs\" (UniqueName: \"kubernetes.io/projected/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-kube-api-access-qkghs\") pod \"keystone-cron-29484961-4jx79\" (UID: \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\") " pod="openstack/keystone-cron-29484961-4jx79"
Jan 22 16:01:00 crc kubenswrapper[4825]: I0122 16:01:00.363697 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-config-data\") pod \"keystone-cron-29484961-4jx79\" (UID: \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\") " pod="openstack/keystone-cron-29484961-4jx79"
Jan 22 16:01:00 crc kubenswrapper[4825]: I0122 16:01:00.364388 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-fernet-keys\") pod \"keystone-cron-29484961-4jx79\" (UID: \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\") " pod="openstack/keystone-cron-29484961-4jx79"
Jan 22 16:01:00 crc kubenswrapper[4825]: I0122 16:01:00.371184 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-combined-ca-bundle\") pod \"keystone-cron-29484961-4jx79\" (UID: \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\") " pod="openstack/keystone-cron-29484961-4jx79"
Jan 22 16:01:00 crc kubenswrapper[4825]: I0122 16:01:00.371243 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-config-data\") pod \"keystone-cron-29484961-4jx79\" (UID: \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\") " pod="openstack/keystone-cron-29484961-4jx79"
Jan 22 16:01:00 crc kubenswrapper[4825]: I0122 16:01:00.371932 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-fernet-keys\") pod \"keystone-cron-29484961-4jx79\" (UID: \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\") " pod="openstack/keystone-cron-29484961-4jx79"
Jan 22 16:01:00 crc kubenswrapper[4825]: I0122 16:01:00.382119 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkghs\" (UniqueName: \"kubernetes.io/projected/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-kube-api-access-qkghs\") pod \"keystone-cron-29484961-4jx79\" (UID: \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\") " pod="openstack/keystone-cron-29484961-4jx79"
Jan 22 16:01:00 crc kubenswrapper[4825]: I0122 16:01:00.508727 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29484961-4jx79"
Jan 22 16:01:01 crc kubenswrapper[4825]: I0122 16:01:01.005045 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29484961-4jx79"]
Jan 22 16:01:01 crc kubenswrapper[4825]: W0122 16:01:01.012642 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7fdc09c_d5ae_4def_a124_b7e5e8a0b23f.slice/crio-b49994ef51393a7546dbd946ab017e32a79db57fabeb76bfdfb89a3b58dcdf7b WatchSource:0}: Error finding container b49994ef51393a7546dbd946ab017e32a79db57fabeb76bfdfb89a3b58dcdf7b: Status 404 returned error can't find the container with id b49994ef51393a7546dbd946ab017e32a79db57fabeb76bfdfb89a3b58dcdf7b
Jan 22 16:01:01 crc kubenswrapper[4825]: I0122 16:01:01.409387 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484961-4jx79" event={"ID":"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f","Type":"ContainerStarted","Data":"df7f09dc95306d051bcfd36ef5579bb188b1aa492534221b16e9e010231ba92c"}
Jan 22 16:01:01 crc kubenswrapper[4825]: I0122 16:01:01.409854 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484961-4jx79" event={"ID":"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f","Type":"ContainerStarted","Data":"b49994ef51393a7546dbd946ab017e32a79db57fabeb76bfdfb89a3b58dcdf7b"}
Jan 22 16:01:01 crc kubenswrapper[4825]: I0122 16:01:01.446364 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l5dq4"]
Jan 22 16:01:01 crc kubenswrapper[4825]: I0122 16:01:01.448280 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29484961-4jx79" podStartSLOduration=1.448257864 podStartE2EDuration="1.448257864s" podCreationTimestamp="2026-01-22 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:01:01.431580131 +0000 UTC m=+2208.193107081" watchObservedRunningTime="2026-01-22 16:01:01.448257864 +0000 UTC m=+2208.209784774"
Jan 22 16:01:01 crc kubenswrapper[4825]: I0122 16:01:01.450471 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5dq4"
Jan 22 16:01:01 crc kubenswrapper[4825]: I0122 16:01:01.491878 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b52bd\" (UniqueName: \"kubernetes.io/projected/59d20760-d852-43bf-8857-09187846b120-kube-api-access-b52bd\") pod \"redhat-marketplace-l5dq4\" (UID: \"59d20760-d852-43bf-8857-09187846b120\") " pod="openshift-marketplace/redhat-marketplace-l5dq4"
Jan 22 16:01:01 crc kubenswrapper[4825]: I0122 16:01:01.492390 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d20760-d852-43bf-8857-09187846b120-catalog-content\") pod \"redhat-marketplace-l5dq4\" (UID: \"59d20760-d852-43bf-8857-09187846b120\") " pod="openshift-marketplace/redhat-marketplace-l5dq4"
Jan 22 16:01:01 crc kubenswrapper[4825]: I0122 16:01:01.492621 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d20760-d852-43bf-8857-09187846b120-utilities\") pod \"redhat-marketplace-l5dq4\" (UID: \"59d20760-d852-43bf-8857-09187846b120\") " pod="openshift-marketplace/redhat-marketplace-l5dq4"
Jan 22 16:01:01 crc kubenswrapper[4825]: I0122 16:01:01.492921 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5dq4"]
Jan 22 16:01:01 crc kubenswrapper[4825]: I0122 16:01:01.594789 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b52bd\" (UniqueName: \"kubernetes.io/projected/59d20760-d852-43bf-8857-09187846b120-kube-api-access-b52bd\") pod \"redhat-marketplace-l5dq4\" (UID: \"59d20760-d852-43bf-8857-09187846b120\") " pod="openshift-marketplace/redhat-marketplace-l5dq4"
Jan 22 16:01:01 crc kubenswrapper[4825]: I0122 16:01:01.594884 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d20760-d852-43bf-8857-09187846b120-catalog-content\") pod \"redhat-marketplace-l5dq4\" (UID: \"59d20760-d852-43bf-8857-09187846b120\") " pod="openshift-marketplace/redhat-marketplace-l5dq4"
Jan 22 16:01:01 crc kubenswrapper[4825]: I0122 16:01:01.594952 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d20760-d852-43bf-8857-09187846b120-utilities\") pod \"redhat-marketplace-l5dq4\" (UID: \"59d20760-d852-43bf-8857-09187846b120\") " pod="openshift-marketplace/redhat-marketplace-l5dq4"
Jan 22 16:01:01 crc kubenswrapper[4825]: I0122 16:01:01.596257 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d20760-d852-43bf-8857-09187846b120-utilities\") pod \"redhat-marketplace-l5dq4\" (UID: \"59d20760-d852-43bf-8857-09187846b120\") " pod="openshift-marketplace/redhat-marketplace-l5dq4"
Jan 22 16:01:01 crc kubenswrapper[4825]: I0122 16:01:01.596399 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d20760-d852-43bf-8857-09187846b120-catalog-content\") pod \"redhat-marketplace-l5dq4\" (UID: \"59d20760-d852-43bf-8857-09187846b120\") " pod="openshift-marketplace/redhat-marketplace-l5dq4"
Jan 22 16:01:01 crc kubenswrapper[4825]: I0122 16:01:01.616021 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b52bd\" (UniqueName: \"kubernetes.io/projected/59d20760-d852-43bf-8857-09187846b120-kube-api-access-b52bd\") pod \"redhat-marketplace-l5dq4\" (UID: \"59d20760-d852-43bf-8857-09187846b120\") " pod="openshift-marketplace/redhat-marketplace-l5dq4"
Jan 22 16:01:01 crc kubenswrapper[4825]: I0122 16:01:01.784685 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5dq4"
Jan 22 16:01:02 crc kubenswrapper[4825]: I0122 16:01:02.420724 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5dq4"]
Jan 22 16:01:02 crc kubenswrapper[4825]: W0122 16:01:02.425605 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59d20760_d852_43bf_8857_09187846b120.slice/crio-5e05bcca7d1718b8f03820d457105310ad7f158deb0197b1bd42ebb6d78e363f WatchSource:0}: Error finding container 5e05bcca7d1718b8f03820d457105310ad7f158deb0197b1bd42ebb6d78e363f: Status 404 returned error can't find the container with id 5e05bcca7d1718b8f03820d457105310ad7f158deb0197b1bd42ebb6d78e363f
Jan 22 16:01:03 crc kubenswrapper[4825]: I0122 16:01:03.723863 4825 generic.go:334] "Generic (PLEG): container finished" podID="59d20760-d852-43bf-8857-09187846b120" containerID="a0decd06970c32f1b358fbf20b5fc1409124070998d52ab82799b6897755d724" exitCode=0
Jan 22 16:01:03 crc kubenswrapper[4825]: I0122 16:01:03.723911 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5dq4" event={"ID":"59d20760-d852-43bf-8857-09187846b120","Type":"ContainerDied","Data":"a0decd06970c32f1b358fbf20b5fc1409124070998d52ab82799b6897755d724"}
Jan 22 16:01:03 crc kubenswrapper[4825]: I0122 16:01:03.724140 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5dq4" event={"ID":"59d20760-d852-43bf-8857-09187846b120","Type":"ContainerStarted","Data":"5e05bcca7d1718b8f03820d457105310ad7f158deb0197b1bd42ebb6d78e363f"}
Jan 22 16:01:04 crc kubenswrapper[4825]: I0122 16:01:04.738233 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5dq4" event={"ID":"59d20760-d852-43bf-8857-09187846b120","Type":"ContainerStarted","Data":"929c567d1070bca29ac0cc9fc97bba79c9f170fedb447061a3520ec80ec0ac84"}
Jan 22 16:01:04 crc kubenswrapper[4825]: I0122 16:01:04.742541 4825 generic.go:334] "Generic (PLEG): container finished" podID="b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f" containerID="df7f09dc95306d051bcfd36ef5579bb188b1aa492534221b16e9e010231ba92c" exitCode=0
Jan 22 16:01:04 crc kubenswrapper[4825]: I0122 16:01:04.742601 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484961-4jx79" event={"ID":"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f","Type":"ContainerDied","Data":"df7f09dc95306d051bcfd36ef5579bb188b1aa492534221b16e9e010231ba92c"}
Jan 22 16:01:05 crc kubenswrapper[4825]: I0122 16:01:05.784690 4825 generic.go:334] "Generic (PLEG): container finished" podID="59d20760-d852-43bf-8857-09187846b120" containerID="929c567d1070bca29ac0cc9fc97bba79c9f170fedb447061a3520ec80ec0ac84" exitCode=0
Jan 22 16:01:05 crc kubenswrapper[4825]: I0122 16:01:05.786256 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5dq4" event={"ID":"59d20760-d852-43bf-8857-09187846b120","Type":"ContainerDied","Data":"929c567d1070bca29ac0cc9fc97bba79c9f170fedb447061a3520ec80ec0ac84"}
Jan 22 16:01:06 crc kubenswrapper[4825]: I0122 16:01:06.206075 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29484961-4jx79"
Jan 22 16:01:06 crc kubenswrapper[4825]: I0122 16:01:06.334702 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-config-data\") pod \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\" (UID: \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\") "
Jan 22 16:01:06 crc kubenswrapper[4825]: I0122 16:01:06.334768 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkghs\" (UniqueName: \"kubernetes.io/projected/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-kube-api-access-qkghs\") pod \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\" (UID: \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\") "
Jan 22 16:01:06 crc kubenswrapper[4825]: I0122 16:01:06.334996 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-combined-ca-bundle\") pod \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\" (UID: \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\") "
Jan 22 16:01:06 crc kubenswrapper[4825]: I0122 16:01:06.335119 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-fernet-keys\") pod \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\" (UID: \"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f\") "
Jan 22 16:01:06 crc kubenswrapper[4825]: I0122 16:01:06.340867 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f" (UID: "b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:01:06 crc kubenswrapper[4825]: I0122 16:01:06.354157 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-kube-api-access-qkghs" (OuterVolumeSpecName: "kube-api-access-qkghs") pod "b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f" (UID: "b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f"). InnerVolumeSpecName "kube-api-access-qkghs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:01:06 crc kubenswrapper[4825]: I0122 16:01:06.387520 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f" (UID: "b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:01:06 crc kubenswrapper[4825]: I0122 16:01:06.424933 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-config-data" (OuterVolumeSpecName: "config-data") pod "b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f" (UID: "b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:01:06 crc kubenswrapper[4825]: I0122 16:01:06.438602 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 16:01:06 crc kubenswrapper[4825]: I0122 16:01:06.438734 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkghs\" (UniqueName: \"kubernetes.io/projected/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-kube-api-access-qkghs\") on node \"crc\" DevicePath \"\""
Jan 22 16:01:06 crc kubenswrapper[4825]: I0122 16:01:06.438843 4825 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 16:01:06 crc kubenswrapper[4825]: I0122 16:01:06.438936 4825 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 22 16:01:06 crc kubenswrapper[4825]: I0122 16:01:06.797973 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484961-4jx79" event={"ID":"b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f","Type":"ContainerDied","Data":"b49994ef51393a7546dbd946ab017e32a79db57fabeb76bfdfb89a3b58dcdf7b"}
Jan 22 16:01:06 crc kubenswrapper[4825]: I0122 16:01:06.798037 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b49994ef51393a7546dbd946ab017e32a79db57fabeb76bfdfb89a3b58dcdf7b"
Jan 22 16:01:06 crc kubenswrapper[4825]: I0122 16:01:06.797999 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29484961-4jx79"
Jan 22 16:01:06 crc kubenswrapper[4825]: I0122 16:01:06.802091 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5dq4" event={"ID":"59d20760-d852-43bf-8857-09187846b120","Type":"ContainerStarted","Data":"99707f6c99c44aecaee74aac974aa3d8c0f4e81dde2e61883d1f234b155f31ae"}
Jan 22 16:01:06 crc kubenswrapper[4825]: I0122 16:01:06.835377 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l5dq4" podStartSLOduration=3.381687898 podStartE2EDuration="5.83535762s" podCreationTimestamp="2026-01-22 16:01:01 +0000 UTC" firstStartedPulling="2026-01-22 16:01:03.728126225 +0000 UTC m=+2210.489653135" lastFinishedPulling="2026-01-22 16:01:06.181795947 +0000 UTC m=+2212.943322857" observedRunningTime="2026-01-22 16:01:06.82868777 +0000 UTC m=+2213.590214680" watchObservedRunningTime="2026-01-22 16:01:06.83535762 +0000 UTC m=+2213.596884530"
Jan 22 16:01:11 crc kubenswrapper[4825]: I0122 16:01:11.785939 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l5dq4"
Jan 22 16:01:11 crc kubenswrapper[4825]: I0122 16:01:11.786831 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l5dq4"
Jan 22 16:01:11 crc kubenswrapper[4825]: I0122 16:01:11.867115 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l5dq4"
Jan 22 16:01:11 crc kubenswrapper[4825]: I0122 16:01:11.952082 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l5dq4"
Jan 22 16:01:12 crc kubenswrapper[4825]: I0122 16:01:12.119082 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5dq4"]
Jan 22 16:01:13 crc kubenswrapper[4825]: I0122 16:01:13.892316 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l5dq4" podUID="59d20760-d852-43bf-8857-09187846b120" containerName="registry-server" containerID="cri-o://99707f6c99c44aecaee74aac974aa3d8c0f4e81dde2e61883d1f234b155f31ae" gracePeriod=2
Jan 22 16:01:14 crc kubenswrapper[4825]: I0122 16:01:14.474447 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5dq4"
Jan 22 16:01:14 crc kubenswrapper[4825]: I0122 16:01:14.647125 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d20760-d852-43bf-8857-09187846b120-catalog-content\") pod \"59d20760-d852-43bf-8857-09187846b120\" (UID: \"59d20760-d852-43bf-8857-09187846b120\") "
Jan 22 16:01:14 crc kubenswrapper[4825]: I0122 16:01:14.647192 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b52bd\" (UniqueName: \"kubernetes.io/projected/59d20760-d852-43bf-8857-09187846b120-kube-api-access-b52bd\") pod \"59d20760-d852-43bf-8857-09187846b120\" (UID: \"59d20760-d852-43bf-8857-09187846b120\") "
Jan 22 16:01:14 crc kubenswrapper[4825]: I0122 16:01:14.647257 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d20760-d852-43bf-8857-09187846b120-utilities\") pod \"59d20760-d852-43bf-8857-09187846b120\" (UID: \"59d20760-d852-43bf-8857-09187846b120\") "
Jan 22 16:01:14 crc kubenswrapper[4825]: I0122 16:01:14.648278 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59d20760-d852-43bf-8857-09187846b120-utilities" (OuterVolumeSpecName: "utilities") pod "59d20760-d852-43bf-8857-09187846b120" (UID: "59d20760-d852-43bf-8857-09187846b120"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 16:01:14 crc kubenswrapper[4825]: I0122 16:01:14.652912 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59d20760-d852-43bf-8857-09187846b120-kube-api-access-b52bd" (OuterVolumeSpecName: "kube-api-access-b52bd") pod "59d20760-d852-43bf-8857-09187846b120" (UID: "59d20760-d852-43bf-8857-09187846b120"). InnerVolumeSpecName "kube-api-access-b52bd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:01:14 crc kubenswrapper[4825]: I0122 16:01:14.686143 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59d20760-d852-43bf-8857-09187846b120-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59d20760-d852-43bf-8857-09187846b120" (UID: "59d20760-d852-43bf-8857-09187846b120"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 16:01:14 crc kubenswrapper[4825]: I0122 16:01:14.749476 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d20760-d852-43bf-8857-09187846b120-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 16:01:14 crc kubenswrapper[4825]: I0122 16:01:14.749510 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b52bd\" (UniqueName: \"kubernetes.io/projected/59d20760-d852-43bf-8857-09187846b120-kube-api-access-b52bd\") on node \"crc\" DevicePath \"\""
Jan 22 16:01:14 crc kubenswrapper[4825]: I0122 16:01:14.749525 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d20760-d852-43bf-8857-09187846b120-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 16:01:14 crc kubenswrapper[4825]: I0122 16:01:14.909301 4825 generic.go:334] "Generic (PLEG): container finished" podID="59d20760-d852-43bf-8857-09187846b120" containerID="99707f6c99c44aecaee74aac974aa3d8c0f4e81dde2e61883d1f234b155f31ae" exitCode=0
Jan 22 16:01:14 crc kubenswrapper[4825]: I0122 16:01:14.909353 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5dq4" event={"ID":"59d20760-d852-43bf-8857-09187846b120","Type":"ContainerDied","Data":"99707f6c99c44aecaee74aac974aa3d8c0f4e81dde2e61883d1f234b155f31ae"}
Jan 22 16:01:14 crc kubenswrapper[4825]: I0122 16:01:14.909394 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5dq4" event={"ID":"59d20760-d852-43bf-8857-09187846b120","Type":"ContainerDied","Data":"5e05bcca7d1718b8f03820d457105310ad7f158deb0197b1bd42ebb6d78e363f"}
Jan 22 16:01:14 crc kubenswrapper[4825]: I0122 16:01:14.909357 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5dq4"
Jan 22 16:01:14 crc kubenswrapper[4825]: I0122 16:01:14.909417 4825 scope.go:117] "RemoveContainer" containerID="99707f6c99c44aecaee74aac974aa3d8c0f4e81dde2e61883d1f234b155f31ae"
Jan 22 16:01:14 crc kubenswrapper[4825]: I0122 16:01:14.968305 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5dq4"]
Jan 22 16:01:14 crc kubenswrapper[4825]: I0122 16:01:14.970796 4825 scope.go:117] "RemoveContainer" containerID="929c567d1070bca29ac0cc9fc97bba79c9f170fedb447061a3520ec80ec0ac84"
Jan 22 16:01:14 crc kubenswrapper[4825]: I0122 16:01:14.998244 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5dq4"]
Jan 22 16:01:15 crc kubenswrapper[4825]: I0122 16:01:15.000285 4825 scope.go:117] "RemoveContainer" containerID="a0decd06970c32f1b358fbf20b5fc1409124070998d52ab82799b6897755d724"
Jan 22 16:01:15 crc kubenswrapper[4825]: I0122 16:01:15.050177 4825 scope.go:117] "RemoveContainer" containerID="99707f6c99c44aecaee74aac974aa3d8c0f4e81dde2e61883d1f234b155f31ae"
Jan 22 16:01:15 crc kubenswrapper[4825]: E0122 16:01:15.050775 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99707f6c99c44aecaee74aac974aa3d8c0f4e81dde2e61883d1f234b155f31ae\": container with ID starting with 99707f6c99c44aecaee74aac974aa3d8c0f4e81dde2e61883d1f234b155f31ae not found: ID does not exist" containerID="99707f6c99c44aecaee74aac974aa3d8c0f4e81dde2e61883d1f234b155f31ae"
Jan 22 16:01:15 crc kubenswrapper[4825]: I0122 16:01:15.050841 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99707f6c99c44aecaee74aac974aa3d8c0f4e81dde2e61883d1f234b155f31ae"} err="failed to get container status \"99707f6c99c44aecaee74aac974aa3d8c0f4e81dde2e61883d1f234b155f31ae\": rpc error: code = NotFound desc = could not find container \"99707f6c99c44aecaee74aac974aa3d8c0f4e81dde2e61883d1f234b155f31ae\": container with ID starting with 99707f6c99c44aecaee74aac974aa3d8c0f4e81dde2e61883d1f234b155f31ae not found: ID does not exist"
Jan 22 16:01:15 crc kubenswrapper[4825]: I0122 16:01:15.050885 4825 scope.go:117] "RemoveContainer" containerID="929c567d1070bca29ac0cc9fc97bba79c9f170fedb447061a3520ec80ec0ac84"
Jan 22 16:01:15 crc kubenswrapper[4825]: E0122 16:01:15.051466 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"929c567d1070bca29ac0cc9fc97bba79c9f170fedb447061a3520ec80ec0ac84\": container with ID starting with 929c567d1070bca29ac0cc9fc97bba79c9f170fedb447061a3520ec80ec0ac84 not found: ID does not exist" containerID="929c567d1070bca29ac0cc9fc97bba79c9f170fedb447061a3520ec80ec0ac84"
Jan 22 16:01:15 crc kubenswrapper[4825]: I0122 16:01:15.051507 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"929c567d1070bca29ac0cc9fc97bba79c9f170fedb447061a3520ec80ec0ac84"} err="failed to get container status \"929c567d1070bca29ac0cc9fc97bba79c9f170fedb447061a3520ec80ec0ac84\": rpc error: code = NotFound desc = could not find container \"929c567d1070bca29ac0cc9fc97bba79c9f170fedb447061a3520ec80ec0ac84\": container with ID starting with 929c567d1070bca29ac0cc9fc97bba79c9f170fedb447061a3520ec80ec0ac84 not found: ID does not exist"
Jan 22 16:01:15 crc kubenswrapper[4825]: I0122 16:01:15.051539 4825 scope.go:117] "RemoveContainer" containerID="a0decd06970c32f1b358fbf20b5fc1409124070998d52ab82799b6897755d724"
Jan 22 16:01:15 crc kubenswrapper[4825]: E0122 16:01:15.051953 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0decd06970c32f1b358fbf20b5fc1409124070998d52ab82799b6897755d724\": container with ID starting with a0decd06970c32f1b358fbf20b5fc1409124070998d52ab82799b6897755d724 not found: ID does not exist" containerID="a0decd06970c32f1b358fbf20b5fc1409124070998d52ab82799b6897755d724"
Jan 22 16:01:15 crc kubenswrapper[4825]: I0122 16:01:15.052021 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0decd06970c32f1b358fbf20b5fc1409124070998d52ab82799b6897755d724"} err="failed to get container status \"a0decd06970c32f1b358fbf20b5fc1409124070998d52ab82799b6897755d724\": rpc error: code = NotFound desc = could not find container \"a0decd06970c32f1b358fbf20b5fc1409124070998d52ab82799b6897755d724\": container with ID starting with a0decd06970c32f1b358fbf20b5fc1409124070998d52ab82799b6897755d724 not found: ID does not exist"
Jan 22 16:01:15 crc kubenswrapper[4825]: I0122 16:01:15.529137 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59d20760-d852-43bf-8857-09187846b120" path="/var/lib/kubelet/pods/59d20760-d852-43bf-8857-09187846b120/volumes"
Jan 22 16:01:23 crc kubenswrapper[4825]: I0122 16:01:23.003015 4825 generic.go:334] "Generic (PLEG): container finished" podID="7fee2632-6167-4d03-adb1-b103201abb59" containerID="5101501993c1da6764e0e07de04a8457425e156023ea72a3167b2e456599f06f" exitCode=0
Jan 22 16:01:23 crc kubenswrapper[4825]: I0122 16:01:23.003151 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" event={"ID":"7fee2632-6167-4d03-adb1-b103201abb59","Type":"ContainerDied","Data":"5101501993c1da6764e0e07de04a8457425e156023ea72a3167b2e456599f06f"}
Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.498556 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm"
Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.633591 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-telemetry-combined-ca-bundle\") pod \"7fee2632-6167-4d03-adb1-b103201abb59\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") "
Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.633679 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-neutron-metadata-combined-ca-bundle\") pod \"7fee2632-6167-4d03-adb1-b103201abb59\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") "
Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.633740 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"7fee2632-6167-4d03-adb1-b103201abb59\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") "
Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.633795 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-ovn-combined-ca-bundle\") pod \"7fee2632-6167-4d03-adb1-b103201abb59\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") "
Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.633826 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-nova-combined-ca-bundle\") pod \"7fee2632-6167-4d03-adb1-b103201abb59\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") "
Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.633910 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-ssh-key-openstack-edpm-ipam\") pod \"7fee2632-6167-4d03-adb1-b103201abb59\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") "
Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.633966 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-bootstrap-combined-ca-bundle\") pod \"7fee2632-6167-4d03-adb1-b103201abb59\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") "
Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.634057 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"7fee2632-6167-4d03-adb1-b103201abb59\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") "
Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.634127 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-ovn-default-certs-0\") pod \"7fee2632-6167-4d03-adb1-b103201abb59\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") "
Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.634213 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-repo-setup-combined-ca-bundle\") pod \"7fee2632-6167-4d03-adb1-b103201abb59\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") "
Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.634235 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vwt8\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-kube-api-access-7vwt8\") pod \"7fee2632-6167-4d03-adb1-b103201abb59\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") "
Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.634260 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-inventory\") pod \"7fee2632-6167-4d03-adb1-b103201abb59\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") "
Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.634296 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-libvirt-combined-ca-bundle\") pod \"7fee2632-6167-4d03-adb1-b103201abb59\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") "
Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.634325 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"7fee2632-6167-4d03-adb1-b103201abb59\" (UID: \"7fee2632-6167-4d03-adb1-b103201abb59\") " Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.640529 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-kube-api-access-7vwt8" (OuterVolumeSpecName: "kube-api-access-7vwt8") pod "7fee2632-6167-4d03-adb1-b103201abb59" (UID: "7fee2632-6167-4d03-adb1-b103201abb59"). InnerVolumeSpecName "kube-api-access-7vwt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.640592 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "7fee2632-6167-4d03-adb1-b103201abb59" (UID: "7fee2632-6167-4d03-adb1-b103201abb59"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.641421 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7fee2632-6167-4d03-adb1-b103201abb59" (UID: "7fee2632-6167-4d03-adb1-b103201abb59"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.642508 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "7fee2632-6167-4d03-adb1-b103201abb59" (UID: "7fee2632-6167-4d03-adb1-b103201abb59"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.643522 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "7fee2632-6167-4d03-adb1-b103201abb59" (UID: "7fee2632-6167-4d03-adb1-b103201abb59"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.643781 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7fee2632-6167-4d03-adb1-b103201abb59" (UID: "7fee2632-6167-4d03-adb1-b103201abb59"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.644204 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "7fee2632-6167-4d03-adb1-b103201abb59" (UID: "7fee2632-6167-4d03-adb1-b103201abb59"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.645969 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "7fee2632-6167-4d03-adb1-b103201abb59" (UID: "7fee2632-6167-4d03-adb1-b103201abb59"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.646395 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "7fee2632-6167-4d03-adb1-b103201abb59" (UID: "7fee2632-6167-4d03-adb1-b103201abb59"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.646524 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7fee2632-6167-4d03-adb1-b103201abb59" (UID: "7fee2632-6167-4d03-adb1-b103201abb59"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.647131 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "7fee2632-6167-4d03-adb1-b103201abb59" (UID: "7fee2632-6167-4d03-adb1-b103201abb59"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.648453 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "7fee2632-6167-4d03-adb1-b103201abb59" (UID: "7fee2632-6167-4d03-adb1-b103201abb59"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.674894 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-inventory" (OuterVolumeSpecName: "inventory") pod "7fee2632-6167-4d03-adb1-b103201abb59" (UID: "7fee2632-6167-4d03-adb1-b103201abb59"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.702625 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7fee2632-6167-4d03-adb1-b103201abb59" (UID: "7fee2632-6167-4d03-adb1-b103201abb59"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.737520 4825 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.737806 4825 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.737907 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vwt8\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-kube-api-access-7vwt8\") on node \"crc\" DevicePath \"\"" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.738013 4825 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.738140 4825 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.738249 4825 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.738346 4825 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.738432 4825 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.738675 4825 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.738857 4825 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.738967 4825 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.739075 4825 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.739210 4825 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee2632-6167-4d03-adb1-b103201abb59-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:01:24 crc kubenswrapper[4825]: I0122 16:01:24.739294 4825 
reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7fee2632-6167-4d03-adb1-b103201abb59-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.030603 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" event={"ID":"7fee2632-6167-4d03-adb1-b103201abb59","Type":"ContainerDied","Data":"1557cb432fc768173571472c88933983ca98bd5495f2edf7abf5a42c832a2065"} Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.031040 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1557cb432fc768173571472c88933983ca98bd5495f2edf7abf5a42c832a2065" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.030729 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52jcm" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.184956 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv"] Jan 22 16:01:25 crc kubenswrapper[4825]: E0122 16:01:25.186041 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f" containerName="keystone-cron" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.186066 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f" containerName="keystone-cron" Jan 22 16:01:25 crc kubenswrapper[4825]: E0122 16:01:25.186101 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59d20760-d852-43bf-8857-09187846b120" containerName="extract-utilities" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.186111 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="59d20760-d852-43bf-8857-09187846b120" 
containerName="extract-utilities" Jan 22 16:01:25 crc kubenswrapper[4825]: E0122 16:01:25.186131 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59d20760-d852-43bf-8857-09187846b120" containerName="registry-server" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.186141 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="59d20760-d852-43bf-8857-09187846b120" containerName="registry-server" Jan 22 16:01:25 crc kubenswrapper[4825]: E0122 16:01:25.186176 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59d20760-d852-43bf-8857-09187846b120" containerName="extract-content" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.186186 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="59d20760-d852-43bf-8857-09187846b120" containerName="extract-content" Jan 22 16:01:25 crc kubenswrapper[4825]: E0122 16:01:25.186224 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fee2632-6167-4d03-adb1-b103201abb59" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.186236 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fee2632-6167-4d03-adb1-b103201abb59" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.186855 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f" containerName="keystone-cron" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.186917 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="59d20760-d852-43bf-8857-09187846b120" containerName="registry-server" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.186949 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fee2632-6167-4d03-adb1-b103201abb59" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.188851 
4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.192526 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.193026 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.194419 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.194698 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.196264 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql4gv" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.206526 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv"] Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.256637 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bb5mv\" (UID: \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.256699 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bb5mv\" (UID: 
\"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.256731 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nwkv\" (UniqueName: \"kubernetes.io/projected/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-kube-api-access-2nwkv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bb5mv\" (UID: \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.256756 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bb5mv\" (UID: \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.256807 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bb5mv\" (UID: \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.359152 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nwkv\" (UniqueName: \"kubernetes.io/projected/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-kube-api-access-2nwkv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bb5mv\" (UID: \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.359207 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bb5mv\" (UID: \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.359269 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bb5mv\" (UID: \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.359404 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bb5mv\" (UID: \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.359437 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bb5mv\" (UID: \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.360448 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bb5mv\" (UID: 
\"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.363561 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bb5mv\" (UID: \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.363965 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bb5mv\" (UID: \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.373772 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bb5mv\" (UID: \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.374700 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nwkv\" (UniqueName: \"kubernetes.io/projected/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-kube-api-access-2nwkv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bb5mv\" (UID: \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" Jan 22 16:01:25 crc kubenswrapper[4825]: I0122 16:01:25.519564 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" Jan 22 16:01:26 crc kubenswrapper[4825]: I0122 16:01:26.110209 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv"] Jan 22 16:01:27 crc kubenswrapper[4825]: I0122 16:01:27.049922 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" event={"ID":"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8","Type":"ContainerStarted","Data":"d877694c6790999281ff2bebb1b4350500b23b742ecb7ccd191907a8e666819b"} Jan 22 16:01:27 crc kubenswrapper[4825]: I0122 16:01:27.050444 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" event={"ID":"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8","Type":"ContainerStarted","Data":"d10721e97768b1168a5310bea14b1c2ab67a714b47b6a7a5ce47a86d35988082"} Jan 22 16:01:27 crc kubenswrapper[4825]: I0122 16:01:27.076691 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" podStartSLOduration=1.6003148390000002 podStartE2EDuration="2.076670648s" podCreationTimestamp="2026-01-22 16:01:25 +0000 UTC" firstStartedPulling="2026-01-22 16:01:26.100352632 +0000 UTC m=+2232.861879552" lastFinishedPulling="2026-01-22 16:01:26.576708451 +0000 UTC m=+2233.338235361" observedRunningTime="2026-01-22 16:01:27.069287749 +0000 UTC m=+2233.830814659" watchObservedRunningTime="2026-01-22 16:01:27.076670648 +0000 UTC m=+2233.838197558" Jan 22 16:02:35 crc kubenswrapper[4825]: I0122 16:02:35.542367 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:02:35 crc kubenswrapper[4825]: I0122 16:02:35.543254 
4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:02:37 crc kubenswrapper[4825]: I0122 16:02:37.083508 4825 generic.go:334] "Generic (PLEG): container finished" podID="3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8" containerID="d877694c6790999281ff2bebb1b4350500b23b742ecb7ccd191907a8e666819b" exitCode=0 Jan 22 16:02:37 crc kubenswrapper[4825]: I0122 16:02:37.083601 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" event={"ID":"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8","Type":"ContainerDied","Data":"d877694c6790999281ff2bebb1b4350500b23b742ecb7ccd191907a8e666819b"} Jan 22 16:02:38 crc kubenswrapper[4825]: I0122 16:02:38.573895 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" Jan 22 16:02:38 crc kubenswrapper[4825]: I0122 16:02:38.747326 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-ovncontroller-config-0\") pod \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\" (UID: \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " Jan 22 16:02:38 crc kubenswrapper[4825]: I0122 16:02:38.747497 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-inventory\") pod \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\" (UID: \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " Jan 22 16:02:38 crc kubenswrapper[4825]: I0122 16:02:38.747639 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nwkv\" (UniqueName: \"kubernetes.io/projected/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-kube-api-access-2nwkv\") pod \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\" (UID: \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " Jan 22 16:02:38 crc kubenswrapper[4825]: I0122 16:02:38.747666 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-ovn-combined-ca-bundle\") pod \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\" (UID: \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " Jan 22 16:02:38 crc kubenswrapper[4825]: I0122 16:02:38.747802 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-ssh-key-openstack-edpm-ipam\") pod \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\" (UID: \"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8\") " Jan 22 16:02:38 crc kubenswrapper[4825]: I0122 16:02:38.763953 4825 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8" (UID: "3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:02:38 crc kubenswrapper[4825]: I0122 16:02:38.777192 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-kube-api-access-2nwkv" (OuterVolumeSpecName: "kube-api-access-2nwkv") pod "3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8" (UID: "3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8"). InnerVolumeSpecName "kube-api-access-2nwkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:02:38 crc kubenswrapper[4825]: I0122 16:02:38.779859 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8" (UID: "3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:02:38 crc kubenswrapper[4825]: I0122 16:02:38.783313 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8" (UID: "3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:02:38 crc kubenswrapper[4825]: I0122 16:02:38.797257 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-inventory" (OuterVolumeSpecName: "inventory") pod "3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8" (UID: "3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.039238 4825 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.039282 4825 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.039299 4825 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.039308 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nwkv\" (UniqueName: \"kubernetes.io/projected/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-kube-api-access-2nwkv\") on node \"crc\" DevicePath \"\"" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.039318 4825 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.105251 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" event={"ID":"3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8","Type":"ContainerDied","Data":"d10721e97768b1168a5310bea14b1c2ab67a714b47b6a7a5ce47a86d35988082"} Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.105334 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bb5mv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.105352 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d10721e97768b1168a5310bea14b1c2ab67a714b47b6a7a5ce47a86d35988082" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.309709 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv"] Jan 22 16:02:39 crc kubenswrapper[4825]: E0122 16:02:39.310261 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.310285 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.310543 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.311624 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.314052 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.314220 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.314243 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.315247 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql4gv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.315296 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.316159 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.324290 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv"] Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.447294 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.447478 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.447549 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.447594 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.447666 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.447739 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dhv2\" (UniqueName: \"kubernetes.io/projected/6300fe1a-799f-43a4-943b-b62dc552c5fb-kube-api-access-7dhv2\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.549755 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.549797 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.549846 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.549893 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.550602 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.550666 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dhv2\" (UniqueName: \"kubernetes.io/projected/6300fe1a-799f-43a4-943b-b62dc552c5fb-kube-api-access-7dhv2\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.554410 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.558971 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.567405 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.568923 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.569788 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dhv2\" (UniqueName: \"kubernetes.io/projected/6300fe1a-799f-43a4-943b-b62dc552c5fb-kube-api-access-7dhv2\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.572756 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv\" (UID: 
\"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:39 crc kubenswrapper[4825]: I0122 16:02:39.634425 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:02:40 crc kubenswrapper[4825]: I0122 16:02:40.524537 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv"] Jan 22 16:02:41 crc kubenswrapper[4825]: I0122 16:02:41.150655 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" event={"ID":"6300fe1a-799f-43a4-943b-b62dc552c5fb","Type":"ContainerStarted","Data":"646cf5bcc6a787cf2b5defd7633feb6462c87005e86e2ef8f9174a553c529372"} Jan 22 16:02:42 crc kubenswrapper[4825]: I0122 16:02:42.163944 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" event={"ID":"6300fe1a-799f-43a4-943b-b62dc552c5fb","Type":"ContainerStarted","Data":"fe74bb34b2e6e0bb8925a0cf9256fa41ec020c7a36b4067b76318b48d5d50d86"} Jan 22 16:02:42 crc kubenswrapper[4825]: I0122 16:02:42.201392 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" podStartSLOduration=2.7407708079999997 podStartE2EDuration="3.201351449s" podCreationTimestamp="2026-01-22 16:02:39 +0000 UTC" firstStartedPulling="2026-01-22 16:02:40.55036191 +0000 UTC m=+2307.311888830" lastFinishedPulling="2026-01-22 16:02:41.010942511 +0000 UTC m=+2307.772469471" observedRunningTime="2026-01-22 16:02:42.18518697 +0000 UTC m=+2308.946713880" watchObservedRunningTime="2026-01-22 16:02:42.201351449 +0000 UTC m=+2308.962878369" Jan 22 16:03:05 crc kubenswrapper[4825]: I0122 16:03:05.541698 4825 patch_prober.go:28] interesting 
pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:03:05 crc kubenswrapper[4825]: I0122 16:03:05.542235 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:03:34 crc kubenswrapper[4825]: I0122 16:03:34.087674 4825 generic.go:334] "Generic (PLEG): container finished" podID="6300fe1a-799f-43a4-943b-b62dc552c5fb" containerID="fe74bb34b2e6e0bb8925a0cf9256fa41ec020c7a36b4067b76318b48d5d50d86" exitCode=0 Jan 22 16:03:34 crc kubenswrapper[4825]: I0122 16:03:34.087786 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" event={"ID":"6300fe1a-799f-43a4-943b-b62dc552c5fb","Type":"ContainerDied","Data":"fe74bb34b2e6e0bb8925a0cf9256fa41ec020c7a36b4067b76318b48d5d50d86"} Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.534128 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.542490 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.542557 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.542605 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.631122 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"6300fe1a-799f-43a4-943b-b62dc552c5fb\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.631377 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-ssh-key-openstack-edpm-ipam\") pod \"6300fe1a-799f-43a4-943b-b62dc552c5fb\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.631505 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-neutron-metadata-combined-ca-bundle\") pod \"6300fe1a-799f-43a4-943b-b62dc552c5fb\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.631556 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-inventory\") pod \"6300fe1a-799f-43a4-943b-b62dc552c5fb\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.631583 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dhv2\" (UniqueName: \"kubernetes.io/projected/6300fe1a-799f-43a4-943b-b62dc552c5fb-kube-api-access-7dhv2\") pod \"6300fe1a-799f-43a4-943b-b62dc552c5fb\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.631616 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-nova-metadata-neutron-config-0\") pod \"6300fe1a-799f-43a4-943b-b62dc552c5fb\" (UID: \"6300fe1a-799f-43a4-943b-b62dc552c5fb\") " Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.639722 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "6300fe1a-799f-43a4-943b-b62dc552c5fb" (UID: "6300fe1a-799f-43a4-943b-b62dc552c5fb"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.644342 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6300fe1a-799f-43a4-943b-b62dc552c5fb-kube-api-access-7dhv2" (OuterVolumeSpecName: "kube-api-access-7dhv2") pod "6300fe1a-799f-43a4-943b-b62dc552c5fb" (UID: "6300fe1a-799f-43a4-943b-b62dc552c5fb"). InnerVolumeSpecName "kube-api-access-7dhv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.668374 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "6300fe1a-799f-43a4-943b-b62dc552c5fb" (UID: "6300fe1a-799f-43a4-943b-b62dc552c5fb"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.669125 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6300fe1a-799f-43a4-943b-b62dc552c5fb" (UID: "6300fe1a-799f-43a4-943b-b62dc552c5fb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.671227 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "6300fe1a-799f-43a4-943b-b62dc552c5fb" (UID: "6300fe1a-799f-43a4-943b-b62dc552c5fb"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.677000 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-inventory" (OuterVolumeSpecName: "inventory") pod "6300fe1a-799f-43a4-943b-b62dc552c5fb" (UID: "6300fe1a-799f-43a4-943b-b62dc552c5fb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.734837 4825 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.735222 4825 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.735323 4825 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.735411 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dhv2\" (UniqueName: \"kubernetes.io/projected/6300fe1a-799f-43a4-943b-b62dc552c5fb-kube-api-access-7dhv2\") on node \"crc\" DevicePath \"\"" Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.735523 4825 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:03:35 crc kubenswrapper[4825]: I0122 16:03:35.735610 4825 reconciler_common.go:293] 
"Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6300fe1a-799f-43a4-943b-b62dc552c5fb-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.109251 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" event={"ID":"6300fe1a-799f-43a4-943b-b62dc552c5fb","Type":"ContainerDied","Data":"646cf5bcc6a787cf2b5defd7633feb6462c87005e86e2ef8f9174a553c529372"} Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.109537 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="646cf5bcc6a787cf2b5defd7633feb6462c87005e86e2ef8f9174a553c529372" Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.109327 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv" Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.110226 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"} pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.110330 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" containerID="cri-o://0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec" gracePeriod=600 Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.283219 4825 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"] Jan 22 16:03:36 crc kubenswrapper[4825]: E0122 16:03:36.283933 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6300fe1a-799f-43a4-943b-b62dc552c5fb" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.283955 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="6300fe1a-799f-43a4-943b-b62dc552c5fb" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.284236 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="6300fe1a-799f-43a4-943b-b62dc552c5fb" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.285327 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx" Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.287924 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.288562 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql4gv" Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.288659 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.288787 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.289365 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.296900 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"]
Jan 22 16:03:36 crc kubenswrapper[4825]: E0122 16:03:36.300527 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.350585 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"
Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.350656 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"
Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.350892 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"
Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.351126 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"
Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.351231 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrlvp\" (UniqueName: \"kubernetes.io/projected/4289f922-fcbd-4485-8fca-83f858eb39a2-kube-api-access-wrlvp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"
Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.453907 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"
Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.454030 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"
Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.454073 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrlvp\" (UniqueName: \"kubernetes.io/projected/4289f922-fcbd-4485-8fca-83f858eb39a2-kube-api-access-wrlvp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"
Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.454243 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"
Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.454279 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"
Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.459069 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"
Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.459337 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"
Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.459529 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"
Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.463692 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"
Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.470767 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrlvp\" (UniqueName: \"kubernetes.io/projected/4289f922-fcbd-4485-8fca-83f858eb39a2-kube-api-access-wrlvp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"
Jan 22 16:03:36 crc kubenswrapper[4825]: I0122 16:03:36.607633 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"
Jan 22 16:03:37 crc kubenswrapper[4825]: I0122 16:03:37.126359 4825 generic.go:334] "Generic (PLEG): container finished" podID="1d6015ae-d193-4854-9861-dc4384510fdb" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec" exitCode=0
Jan 22 16:03:37 crc kubenswrapper[4825]: I0122 16:03:37.126450 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerDied","Data":"0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"}
Jan 22 16:03:37 crc kubenswrapper[4825]: I0122 16:03:37.126691 4825 scope.go:117] "RemoveContainer" containerID="5bcc1d277e3ad443248de981b2ff45bbf7029c5fe07cb018b3784c1adec9e60c"
Jan 22 16:03:37 crc kubenswrapper[4825]: I0122 16:03:37.127461 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:03:37 crc kubenswrapper[4825]: E0122 16:03:37.127802 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:03:37 crc kubenswrapper[4825]: I0122 16:03:37.231376 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"]
Jan 22 16:03:38 crc kubenswrapper[4825]: I0122 16:03:38.156415 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx" event={"ID":"4289f922-fcbd-4485-8fca-83f858eb39a2","Type":"ContainerStarted","Data":"aaa5ddc39c83052cb82c2c94f9d4e185f53a84b6bbdaca843fbe36c2c749e0ee"}
Jan 22 16:03:38 crc kubenswrapper[4825]: I0122 16:03:38.156741 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx" event={"ID":"4289f922-fcbd-4485-8fca-83f858eb39a2","Type":"ContainerStarted","Data":"36f23983fbb84a17e9b6e7fc582b742581650e205fa7d3eb42213d25f3ac2291"}
Jan 22 16:03:38 crc kubenswrapper[4825]: I0122 16:03:38.181429 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx" podStartSLOduration=1.5728018700000002 podStartE2EDuration="2.181410419s" podCreationTimestamp="2026-01-22 16:03:36 +0000 UTC" firstStartedPulling="2026-01-22 16:03:37.242137963 +0000 UTC m=+2364.003664873" lastFinishedPulling="2026-01-22 16:03:37.850746512 +0000 UTC m=+2364.612273422" observedRunningTime="2026-01-22 16:03:38.17932204 +0000 UTC m=+2364.940848970" watchObservedRunningTime="2026-01-22 16:03:38.181410419 +0000 UTC m=+2364.942937329"
Jan 22 16:03:50 crc kubenswrapper[4825]: I0122 16:03:50.518311 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:03:50 crc kubenswrapper[4825]: E0122 16:03:50.520060 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:04:02 crc kubenswrapper[4825]: I0122 16:04:02.517314 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:04:02 crc kubenswrapper[4825]: E0122 16:04:02.518174 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:04:13 crc kubenswrapper[4825]: I0122 16:04:13.528924 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:04:13 crc kubenswrapper[4825]: E0122 16:04:13.530623 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:04:28 crc kubenswrapper[4825]: I0122 16:04:28.517734 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:04:28 crc kubenswrapper[4825]: E0122 16:04:28.518517 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:04:42 crc kubenswrapper[4825]: I0122 16:04:42.517630 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:04:42 crc kubenswrapper[4825]: E0122 16:04:42.518478 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:04:53 crc kubenswrapper[4825]: I0122 16:04:53.524717 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:04:53 crc kubenswrapper[4825]: E0122 16:04:53.526011 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:05:06 crc kubenswrapper[4825]: I0122 16:05:06.519076 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:05:06 crc kubenswrapper[4825]: E0122 16:05:06.519874 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:05:19 crc kubenswrapper[4825]: I0122 16:05:19.517082 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:05:19 crc kubenswrapper[4825]: E0122 16:05:19.517916 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:05:30 crc kubenswrapper[4825]: I0122 16:05:30.517025 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:05:30 crc kubenswrapper[4825]: E0122 16:05:30.517944 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:05:41 crc kubenswrapper[4825]: I0122 16:05:41.519413 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:05:41 crc kubenswrapper[4825]: E0122 16:05:41.522394 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:05:45 crc kubenswrapper[4825]: I0122 16:05:45.005940 4825 scope.go:117] "RemoveContainer" containerID="b5bb438688d644068d6b1cbf0fa31dd2d1f408c201bdd24833d2a0d9812e3a63"
Jan 22 16:05:45 crc kubenswrapper[4825]: I0122 16:05:45.045888 4825 scope.go:117] "RemoveContainer" containerID="13f6349a8f86cbd719ab6076b722e81e644f2d2da7bc2241949ef513f721348d"
Jan 22 16:05:45 crc kubenswrapper[4825]: I0122 16:05:45.094597 4825 scope.go:117] "RemoveContainer" containerID="03e7f51e80f4670ada2b030d53972ea106c3bfa7412773428cc26c17076d1d7b"
Jan 22 16:05:52 crc kubenswrapper[4825]: I0122 16:05:52.517490 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:05:52 crc kubenswrapper[4825]: E0122 16:05:52.518667 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:06:06 crc kubenswrapper[4825]: I0122 16:06:06.517402 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:06:06 crc kubenswrapper[4825]: E0122 16:06:06.519861 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:06:19 crc kubenswrapper[4825]: I0122 16:06:19.517215 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:06:19 crc kubenswrapper[4825]: E0122 16:06:19.519725 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:06:31 crc kubenswrapper[4825]: I0122 16:06:31.518351 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:06:31 crc kubenswrapper[4825]: E0122 16:06:31.519333 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:06:45 crc kubenswrapper[4825]: I0122 16:06:45.517583 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:06:45 crc kubenswrapper[4825]: E0122 16:06:45.518675 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:06:59 crc kubenswrapper[4825]: I0122 16:06:59.517323 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:06:59 crc kubenswrapper[4825]: E0122 16:06:59.518024 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:07:13 crc kubenswrapper[4825]: I0122 16:07:13.524448 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:07:13 crc kubenswrapper[4825]: E0122 16:07:13.526538 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:07:25 crc kubenswrapper[4825]: I0122 16:07:25.517722 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:07:25 crc kubenswrapper[4825]: E0122 16:07:25.518637 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:07:36 crc kubenswrapper[4825]: I0122 16:07:36.517590 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:07:36 crc kubenswrapper[4825]: E0122 16:07:36.518919 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:07:48 crc kubenswrapper[4825]: I0122 16:07:48.518546 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:07:48 crc kubenswrapper[4825]: E0122 16:07:48.520070 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:08:02 crc kubenswrapper[4825]: I0122 16:08:02.517617 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:08:02 crc kubenswrapper[4825]: E0122 16:08:02.519026 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:08:14 crc kubenswrapper[4825]: I0122 16:08:14.517342 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:08:14 crc kubenswrapper[4825]: E0122 16:08:14.518048 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:08:26 crc kubenswrapper[4825]: I0122 16:08:26.489992 4825 generic.go:334] "Generic (PLEG): container finished" podID="4289f922-fcbd-4485-8fca-83f858eb39a2" containerID="aaa5ddc39c83052cb82c2c94f9d4e185f53a84b6bbdaca843fbe36c2c749e0ee" exitCode=0
Jan 22 16:08:26 crc kubenswrapper[4825]: I0122 16:08:26.490049 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx" event={"ID":"4289f922-fcbd-4485-8fca-83f858eb39a2","Type":"ContainerDied","Data":"aaa5ddc39c83052cb82c2c94f9d4e185f53a84b6bbdaca843fbe36c2c749e0ee"}
Jan 22 16:08:27 crc kubenswrapper[4825]: I0122 16:08:27.518324 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec"
Jan 22 16:08:27 crc kubenswrapper[4825]: E0122 16:08:27.518967 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.062236 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.159092 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-libvirt-combined-ca-bundle\") pod \"4289f922-fcbd-4485-8fca-83f858eb39a2\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") "
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.159295 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrlvp\" (UniqueName: \"kubernetes.io/projected/4289f922-fcbd-4485-8fca-83f858eb39a2-kube-api-access-wrlvp\") pod \"4289f922-fcbd-4485-8fca-83f858eb39a2\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") "
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.159360 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-ssh-key-openstack-edpm-ipam\") pod \"4289f922-fcbd-4485-8fca-83f858eb39a2\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") "
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.159430 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-inventory\") pod \"4289f922-fcbd-4485-8fca-83f858eb39a2\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") "
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.159582 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-libvirt-secret-0\") pod \"4289f922-fcbd-4485-8fca-83f858eb39a2\" (UID: \"4289f922-fcbd-4485-8fca-83f858eb39a2\") "
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.168613 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "4289f922-fcbd-4485-8fca-83f858eb39a2" (UID: "4289f922-fcbd-4485-8fca-83f858eb39a2"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.168957 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4289f922-fcbd-4485-8fca-83f858eb39a2-kube-api-access-wrlvp" (OuterVolumeSpecName: "kube-api-access-wrlvp") pod "4289f922-fcbd-4485-8fca-83f858eb39a2" (UID: "4289f922-fcbd-4485-8fca-83f858eb39a2"). InnerVolumeSpecName "kube-api-access-wrlvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.203972 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4289f922-fcbd-4485-8fca-83f858eb39a2" (UID: "4289f922-fcbd-4485-8fca-83f858eb39a2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.219726 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-inventory" (OuterVolumeSpecName: "inventory") pod "4289f922-fcbd-4485-8fca-83f858eb39a2" (UID: "4289f922-fcbd-4485-8fca-83f858eb39a2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.233642 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "4289f922-fcbd-4485-8fca-83f858eb39a2" (UID: "4289f922-fcbd-4485-8fca-83f858eb39a2"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.263198 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrlvp\" (UniqueName: \"kubernetes.io/projected/4289f922-fcbd-4485-8fca-83f858eb39a2-kube-api-access-wrlvp\") on node \"crc\" DevicePath \"\""
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.263243 4825 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.263260 4825 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-inventory\") on node \"crc\" DevicePath \"\""
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.263274 4825 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.263291 4825 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4289f922-fcbd-4485-8fca-83f858eb39a2-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.513629 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx" event={"ID":"4289f922-fcbd-4485-8fca-83f858eb39a2","Type":"ContainerDied","Data":"36f23983fbb84a17e9b6e7fc582b742581650e205fa7d3eb42213d25f3ac2291"}
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.513675 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36f23983fbb84a17e9b6e7fc582b742581650e205fa7d3eb42213d25f3ac2291"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.513766 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.638694 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4"]
Jan 22 16:08:28 crc kubenswrapper[4825]: E0122 16:08:28.639222 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4289f922-fcbd-4485-8fca-83f858eb39a2" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.639240 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="4289f922-fcbd-4485-8fca-83f858eb39a2" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.639520 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="4289f922-fcbd-4485-8fca-83f858eb39a2" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.640403 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.643689 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.643873 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.643891 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.645360 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.645470 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.645634 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.647552 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql4gv"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.654597 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4"]
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.806972 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.807047 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.807065 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.807106 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.807444 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.807517 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n4wl\" (UniqueName: \"kubernetes.io/projected/e461766f-09e2-4b85-87e7-9e5048f701cd-kube-api-access-4n4wl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.807554 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.807708 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.807787 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4"
Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.909462 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\")
" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.909557 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.909636 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.909716 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.909751 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.909815 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" 
(UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.909944 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.910031 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n4wl\" (UniqueName: \"kubernetes.io/projected/e461766f-09e2-4b85-87e7-9e5048f701cd-kube-api-access-4n4wl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.910076 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.910595 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.915666 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.916229 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.916332 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.917443 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.917486 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.917662 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.917826 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.932521 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n4wl\" (UniqueName: \"kubernetes.io/projected/e461766f-09e2-4b85-87e7-9e5048f701cd-kube-api-access-4n4wl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nc2g4\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:08:28 crc kubenswrapper[4825]: I0122 16:08:28.964363 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:08:29 crc kubenswrapper[4825]: I0122 16:08:29.668083 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4"] Jan 22 16:08:29 crc kubenswrapper[4825]: I0122 16:08:29.681001 4825 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 16:08:30 crc kubenswrapper[4825]: I0122 16:08:30.562457 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" event={"ID":"e461766f-09e2-4b85-87e7-9e5048f701cd","Type":"ContainerStarted","Data":"f93e81bfeb286704e5a8c25955e5e6a297b4150ff55f70e5944707978327db28"} Jan 22 16:08:30 crc kubenswrapper[4825]: I0122 16:08:30.563238 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" event={"ID":"e461766f-09e2-4b85-87e7-9e5048f701cd","Type":"ContainerStarted","Data":"b463d2b9b8e763a527c0c273e5511e3af11a5a33e556a73491ef7d842b7aefa8"} Jan 22 16:08:30 crc kubenswrapper[4825]: I0122 16:08:30.595661 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" podStartSLOduration=2.104988024 podStartE2EDuration="2.595634517s" podCreationTimestamp="2026-01-22 16:08:28 +0000 UTC" firstStartedPulling="2026-01-22 16:08:29.680626365 +0000 UTC m=+2656.442153285" lastFinishedPulling="2026-01-22 16:08:30.171272868 +0000 UTC m=+2656.932799778" observedRunningTime="2026-01-22 16:08:30.594728492 +0000 UTC m=+2657.356255412" watchObservedRunningTime="2026-01-22 16:08:30.595634517 +0000 UTC m=+2657.357161437" Jan 22 16:08:42 crc kubenswrapper[4825]: I0122 16:08:42.517483 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec" Jan 22 16:08:43 crc kubenswrapper[4825]: I0122 16:08:43.808603 4825 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerStarted","Data":"2c3066f9e0f387705d530f10b82fa5e48f74d5b1b2427dd55665164230f71184"} Jan 22 16:08:49 crc kubenswrapper[4825]: I0122 16:08:49.076947 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lr5ws"] Jan 22 16:08:49 crc kubenswrapper[4825]: I0122 16:08:49.080702 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lr5ws" Jan 22 16:08:49 crc kubenswrapper[4825]: I0122 16:08:49.101064 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lr5ws"] Jan 22 16:08:49 crc kubenswrapper[4825]: I0122 16:08:49.122140 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5fee3b-6b13-47f7-aa99-8ac3068afb93-catalog-content\") pod \"certified-operators-lr5ws\" (UID: \"2a5fee3b-6b13-47f7-aa99-8ac3068afb93\") " pod="openshift-marketplace/certified-operators-lr5ws" Jan 22 16:08:49 crc kubenswrapper[4825]: I0122 16:08:49.122183 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5fee3b-6b13-47f7-aa99-8ac3068afb93-utilities\") pod \"certified-operators-lr5ws\" (UID: \"2a5fee3b-6b13-47f7-aa99-8ac3068afb93\") " pod="openshift-marketplace/certified-operators-lr5ws" Jan 22 16:08:49 crc kubenswrapper[4825]: I0122 16:08:49.122222 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz4xf\" (UniqueName: \"kubernetes.io/projected/2a5fee3b-6b13-47f7-aa99-8ac3068afb93-kube-api-access-gz4xf\") pod \"certified-operators-lr5ws\" (UID: \"2a5fee3b-6b13-47f7-aa99-8ac3068afb93\") " 
pod="openshift-marketplace/certified-operators-lr5ws" Jan 22 16:08:49 crc kubenswrapper[4825]: I0122 16:08:49.224308 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5fee3b-6b13-47f7-aa99-8ac3068afb93-catalog-content\") pod \"certified-operators-lr5ws\" (UID: \"2a5fee3b-6b13-47f7-aa99-8ac3068afb93\") " pod="openshift-marketplace/certified-operators-lr5ws" Jan 22 16:08:49 crc kubenswrapper[4825]: I0122 16:08:49.224364 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5fee3b-6b13-47f7-aa99-8ac3068afb93-utilities\") pod \"certified-operators-lr5ws\" (UID: \"2a5fee3b-6b13-47f7-aa99-8ac3068afb93\") " pod="openshift-marketplace/certified-operators-lr5ws" Jan 22 16:08:49 crc kubenswrapper[4825]: I0122 16:08:49.224426 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz4xf\" (UniqueName: \"kubernetes.io/projected/2a5fee3b-6b13-47f7-aa99-8ac3068afb93-kube-api-access-gz4xf\") pod \"certified-operators-lr5ws\" (UID: \"2a5fee3b-6b13-47f7-aa99-8ac3068afb93\") " pod="openshift-marketplace/certified-operators-lr5ws" Jan 22 16:08:49 crc kubenswrapper[4825]: I0122 16:08:49.225500 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5fee3b-6b13-47f7-aa99-8ac3068afb93-catalog-content\") pod \"certified-operators-lr5ws\" (UID: \"2a5fee3b-6b13-47f7-aa99-8ac3068afb93\") " pod="openshift-marketplace/certified-operators-lr5ws" Jan 22 16:08:49 crc kubenswrapper[4825]: I0122 16:08:49.225827 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5fee3b-6b13-47f7-aa99-8ac3068afb93-utilities\") pod \"certified-operators-lr5ws\" (UID: \"2a5fee3b-6b13-47f7-aa99-8ac3068afb93\") " 
pod="openshift-marketplace/certified-operators-lr5ws" Jan 22 16:08:49 crc kubenswrapper[4825]: I0122 16:08:49.245379 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz4xf\" (UniqueName: \"kubernetes.io/projected/2a5fee3b-6b13-47f7-aa99-8ac3068afb93-kube-api-access-gz4xf\") pod \"certified-operators-lr5ws\" (UID: \"2a5fee3b-6b13-47f7-aa99-8ac3068afb93\") " pod="openshift-marketplace/certified-operators-lr5ws" Jan 22 16:08:49 crc kubenswrapper[4825]: I0122 16:08:49.409423 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lr5ws" Jan 22 16:08:50 crc kubenswrapper[4825]: I0122 16:08:50.107424 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lr5ws"] Jan 22 16:08:50 crc kubenswrapper[4825]: W0122 16:08:50.108031 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a5fee3b_6b13_47f7_aa99_8ac3068afb93.slice/crio-5c77530ee42392af2fe1afccf222d3c5ed049f3304f2d00f9496c6391c05969b WatchSource:0}: Error finding container 5c77530ee42392af2fe1afccf222d3c5ed049f3304f2d00f9496c6391c05969b: Status 404 returned error can't find the container with id 5c77530ee42392af2fe1afccf222d3c5ed049f3304f2d00f9496c6391c05969b Jan 22 16:08:50 crc kubenswrapper[4825]: I0122 16:08:50.878787 4825 generic.go:334] "Generic (PLEG): container finished" podID="2a5fee3b-6b13-47f7-aa99-8ac3068afb93" containerID="069fdba9e8f149b9cdefd147fb0cffcbb3e5ade662800f827fd6b58c18257e59" exitCode=0 Jan 22 16:08:50 crc kubenswrapper[4825]: I0122 16:08:50.879023 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr5ws" event={"ID":"2a5fee3b-6b13-47f7-aa99-8ac3068afb93","Type":"ContainerDied","Data":"069fdba9e8f149b9cdefd147fb0cffcbb3e5ade662800f827fd6b58c18257e59"} Jan 22 16:08:50 crc kubenswrapper[4825]: I0122 16:08:50.879193 
4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr5ws" event={"ID":"2a5fee3b-6b13-47f7-aa99-8ac3068afb93","Type":"ContainerStarted","Data":"5c77530ee42392af2fe1afccf222d3c5ed049f3304f2d00f9496c6391c05969b"} Jan 22 16:08:55 crc kubenswrapper[4825]: I0122 16:08:55.935442 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr5ws" event={"ID":"2a5fee3b-6b13-47f7-aa99-8ac3068afb93","Type":"ContainerStarted","Data":"c3817c6b494acc4e865e2559eabeef9896f0b8a87dd450a6c3368571bfedb3fa"} Jan 22 16:08:56 crc kubenswrapper[4825]: I0122 16:08:56.950825 4825 generic.go:334] "Generic (PLEG): container finished" podID="2a5fee3b-6b13-47f7-aa99-8ac3068afb93" containerID="c3817c6b494acc4e865e2559eabeef9896f0b8a87dd450a6c3368571bfedb3fa" exitCode=0 Jan 22 16:08:56 crc kubenswrapper[4825]: I0122 16:08:56.950901 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr5ws" event={"ID":"2a5fee3b-6b13-47f7-aa99-8ac3068afb93","Type":"ContainerDied","Data":"c3817c6b494acc4e865e2559eabeef9896f0b8a87dd450a6c3368571bfedb3fa"} Jan 22 16:08:58 crc kubenswrapper[4825]: I0122 16:08:58.973202 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr5ws" event={"ID":"2a5fee3b-6b13-47f7-aa99-8ac3068afb93","Type":"ContainerStarted","Data":"9aa28e43cfd8d7ef58f14413e8fdae09b1da86a9f20f01c4f12f23f339924566"} Jan 22 16:08:59 crc kubenswrapper[4825]: I0122 16:08:59.008549 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lr5ws" podStartSLOduration=3.120674598 podStartE2EDuration="10.008516017s" podCreationTimestamp="2026-01-22 16:08:49 +0000 UTC" firstStartedPulling="2026-01-22 16:08:50.881171045 +0000 UTC m=+2677.642697965" lastFinishedPulling="2026-01-22 16:08:57.769012474 +0000 UTC m=+2684.530539384" observedRunningTime="2026-01-22 
16:08:58.999996964 +0000 UTC m=+2685.761523874" watchObservedRunningTime="2026-01-22 16:08:59.008516017 +0000 UTC m=+2685.770042937" Jan 22 16:08:59 crc kubenswrapper[4825]: I0122 16:08:59.463930 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lr5ws" Jan 22 16:08:59 crc kubenswrapper[4825]: I0122 16:08:59.464285 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lr5ws" Jan 22 16:09:00 crc kubenswrapper[4825]: I0122 16:09:00.515427 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-lr5ws" podUID="2a5fee3b-6b13-47f7-aa99-8ac3068afb93" containerName="registry-server" probeResult="failure" output=< Jan 22 16:09:00 crc kubenswrapper[4825]: timeout: failed to connect service ":50051" within 1s Jan 22 16:09:00 crc kubenswrapper[4825]: > Jan 22 16:09:04 crc kubenswrapper[4825]: I0122 16:09:04.957560 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xclpn"] Jan 22 16:09:04 crc kubenswrapper[4825]: I0122 16:09:04.960720 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xclpn" Jan 22 16:09:04 crc kubenswrapper[4825]: I0122 16:09:04.988399 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xclpn"] Jan 22 16:09:05 crc kubenswrapper[4825]: I0122 16:09:05.156147 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8twsf\" (UniqueName: \"kubernetes.io/projected/d0877258-eaee-4344-ba3e-1006ffbd350c-kube-api-access-8twsf\") pod \"redhat-operators-xclpn\" (UID: \"d0877258-eaee-4344-ba3e-1006ffbd350c\") " pod="openshift-marketplace/redhat-operators-xclpn" Jan 22 16:09:05 crc kubenswrapper[4825]: I0122 16:09:05.156794 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0877258-eaee-4344-ba3e-1006ffbd350c-catalog-content\") pod \"redhat-operators-xclpn\" (UID: \"d0877258-eaee-4344-ba3e-1006ffbd350c\") " pod="openshift-marketplace/redhat-operators-xclpn" Jan 22 16:09:05 crc kubenswrapper[4825]: I0122 16:09:05.156938 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0877258-eaee-4344-ba3e-1006ffbd350c-utilities\") pod \"redhat-operators-xclpn\" (UID: \"d0877258-eaee-4344-ba3e-1006ffbd350c\") " pod="openshift-marketplace/redhat-operators-xclpn" Jan 22 16:09:05 crc kubenswrapper[4825]: I0122 16:09:05.259195 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8twsf\" (UniqueName: \"kubernetes.io/projected/d0877258-eaee-4344-ba3e-1006ffbd350c-kube-api-access-8twsf\") pod \"redhat-operators-xclpn\" (UID: \"d0877258-eaee-4344-ba3e-1006ffbd350c\") " pod="openshift-marketplace/redhat-operators-xclpn" Jan 22 16:09:05 crc kubenswrapper[4825]: I0122 16:09:05.259250 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0877258-eaee-4344-ba3e-1006ffbd350c-catalog-content\") pod \"redhat-operators-xclpn\" (UID: \"d0877258-eaee-4344-ba3e-1006ffbd350c\") " pod="openshift-marketplace/redhat-operators-xclpn" Jan 22 16:09:05 crc kubenswrapper[4825]: I0122 16:09:05.259274 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0877258-eaee-4344-ba3e-1006ffbd350c-utilities\") pod \"redhat-operators-xclpn\" (UID: \"d0877258-eaee-4344-ba3e-1006ffbd350c\") " pod="openshift-marketplace/redhat-operators-xclpn" Jan 22 16:09:05 crc kubenswrapper[4825]: I0122 16:09:05.259929 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0877258-eaee-4344-ba3e-1006ffbd350c-utilities\") pod \"redhat-operators-xclpn\" (UID: \"d0877258-eaee-4344-ba3e-1006ffbd350c\") " pod="openshift-marketplace/redhat-operators-xclpn" Jan 22 16:09:05 crc kubenswrapper[4825]: I0122 16:09:05.260155 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0877258-eaee-4344-ba3e-1006ffbd350c-catalog-content\") pod \"redhat-operators-xclpn\" (UID: \"d0877258-eaee-4344-ba3e-1006ffbd350c\") " pod="openshift-marketplace/redhat-operators-xclpn" Jan 22 16:09:05 crc kubenswrapper[4825]: I0122 16:09:05.278310 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8twsf\" (UniqueName: \"kubernetes.io/projected/d0877258-eaee-4344-ba3e-1006ffbd350c-kube-api-access-8twsf\") pod \"redhat-operators-xclpn\" (UID: \"d0877258-eaee-4344-ba3e-1006ffbd350c\") " pod="openshift-marketplace/redhat-operators-xclpn" Jan 22 16:09:05 crc kubenswrapper[4825]: I0122 16:09:05.290196 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xclpn" Jan 22 16:09:06 crc kubenswrapper[4825]: I0122 16:09:05.858552 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xclpn"] Jan 22 16:09:06 crc kubenswrapper[4825]: I0122 16:09:06.161596 4825 generic.go:334] "Generic (PLEG): container finished" podID="d0877258-eaee-4344-ba3e-1006ffbd350c" containerID="1fa594d5019f90d85c9497024041ead482ea722ffe179198717dee7d077f3571" exitCode=0 Jan 22 16:09:06 crc kubenswrapper[4825]: I0122 16:09:06.161812 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xclpn" event={"ID":"d0877258-eaee-4344-ba3e-1006ffbd350c","Type":"ContainerDied","Data":"1fa594d5019f90d85c9497024041ead482ea722ffe179198717dee7d077f3571"} Jan 22 16:09:06 crc kubenswrapper[4825]: I0122 16:09:06.161926 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xclpn" event={"ID":"d0877258-eaee-4344-ba3e-1006ffbd350c","Type":"ContainerStarted","Data":"54861d8244e8e5f3533be8f0339d3c2c7fe7f4ca89e1a2e31d60a70cc0dbc81a"} Jan 22 16:09:08 crc kubenswrapper[4825]: I0122 16:09:08.182602 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xclpn" event={"ID":"d0877258-eaee-4344-ba3e-1006ffbd350c","Type":"ContainerStarted","Data":"19ed4b5bb9aad46c56d5c1a38690c6c3d20f1309dec4154cd523645971a3c99a"} Jan 22 16:09:09 crc kubenswrapper[4825]: I0122 16:09:09.458688 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lr5ws" Jan 22 16:09:09 crc kubenswrapper[4825]: I0122 16:09:09.528475 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lr5ws" Jan 22 16:09:10 crc kubenswrapper[4825]: I0122 16:09:10.145502 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-lr5ws"] Jan 22 16:09:10 crc kubenswrapper[4825]: I0122 16:09:10.324298 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jjktq"] Jan 22 16:09:10 crc kubenswrapper[4825]: I0122 16:09:10.324607 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jjktq" podUID="697daa05-e987-4bf2-a924-df734a327432" containerName="registry-server" containerID="cri-o://1f93e42b2f89d4469144392eb85732191db5686b95c0857ff7d51a50ece9a020" gracePeriod=2 Jan 22 16:09:11 crc kubenswrapper[4825]: I0122 16:09:11.238609 4825 generic.go:334] "Generic (PLEG): container finished" podID="697daa05-e987-4bf2-a924-df734a327432" containerID="1f93e42b2f89d4469144392eb85732191db5686b95c0857ff7d51a50ece9a020" exitCode=0 Jan 22 16:09:11 crc kubenswrapper[4825]: I0122 16:09:11.238722 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjktq" event={"ID":"697daa05-e987-4bf2-a924-df734a327432","Type":"ContainerDied","Data":"1f93e42b2f89d4469144392eb85732191db5686b95c0857ff7d51a50ece9a020"} Jan 22 16:09:11 crc kubenswrapper[4825]: I0122 16:09:11.639110 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jjktq" Jan 22 16:09:11 crc kubenswrapper[4825]: I0122 16:09:11.671547 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/697daa05-e987-4bf2-a924-df734a327432-catalog-content\") pod \"697daa05-e987-4bf2-a924-df734a327432\" (UID: \"697daa05-e987-4bf2-a924-df734a327432\") " Jan 22 16:09:11 crc kubenswrapper[4825]: I0122 16:09:11.671631 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzkcg\" (UniqueName: \"kubernetes.io/projected/697daa05-e987-4bf2-a924-df734a327432-kube-api-access-fzkcg\") pod \"697daa05-e987-4bf2-a924-df734a327432\" (UID: \"697daa05-e987-4bf2-a924-df734a327432\") " Jan 22 16:09:11 crc kubenswrapper[4825]: I0122 16:09:11.671766 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/697daa05-e987-4bf2-a924-df734a327432-utilities\") pod \"697daa05-e987-4bf2-a924-df734a327432\" (UID: \"697daa05-e987-4bf2-a924-df734a327432\") " Jan 22 16:09:11 crc kubenswrapper[4825]: I0122 16:09:11.672470 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/697daa05-e987-4bf2-a924-df734a327432-utilities" (OuterVolumeSpecName: "utilities") pod "697daa05-e987-4bf2-a924-df734a327432" (UID: "697daa05-e987-4bf2-a924-df734a327432"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:09:11 crc kubenswrapper[4825]: I0122 16:09:11.678725 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/697daa05-e987-4bf2-a924-df734a327432-kube-api-access-fzkcg" (OuterVolumeSpecName: "kube-api-access-fzkcg") pod "697daa05-e987-4bf2-a924-df734a327432" (UID: "697daa05-e987-4bf2-a924-df734a327432"). InnerVolumeSpecName "kube-api-access-fzkcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:09:11 crc kubenswrapper[4825]: I0122 16:09:11.753421 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/697daa05-e987-4bf2-a924-df734a327432-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "697daa05-e987-4bf2-a924-df734a327432" (UID: "697daa05-e987-4bf2-a924-df734a327432"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:09:11 crc kubenswrapper[4825]: I0122 16:09:11.775130 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/697daa05-e987-4bf2-a924-df734a327432-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:09:11 crc kubenswrapper[4825]: I0122 16:09:11.775465 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/697daa05-e987-4bf2-a924-df734a327432-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:09:11 crc kubenswrapper[4825]: I0122 16:09:11.775477 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzkcg\" (UniqueName: \"kubernetes.io/projected/697daa05-e987-4bf2-a924-df734a327432-kube-api-access-fzkcg\") on node \"crc\" DevicePath \"\"" Jan 22 16:09:12 crc kubenswrapper[4825]: I0122 16:09:12.250839 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjktq" event={"ID":"697daa05-e987-4bf2-a924-df734a327432","Type":"ContainerDied","Data":"4764f6f36e1cb016f75fe471c0682ac831b3893c6603261fc69d10718c8c4a34"} Jan 22 16:09:12 crc kubenswrapper[4825]: I0122 16:09:12.250881 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jjktq" Jan 22 16:09:12 crc kubenswrapper[4825]: I0122 16:09:12.250902 4825 scope.go:117] "RemoveContainer" containerID="1f93e42b2f89d4469144392eb85732191db5686b95c0857ff7d51a50ece9a020" Jan 22 16:09:12 crc kubenswrapper[4825]: I0122 16:09:12.252966 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xclpn" event={"ID":"d0877258-eaee-4344-ba3e-1006ffbd350c","Type":"ContainerDied","Data":"19ed4b5bb9aad46c56d5c1a38690c6c3d20f1309dec4154cd523645971a3c99a"} Jan 22 16:09:12 crc kubenswrapper[4825]: I0122 16:09:12.253799 4825 generic.go:334] "Generic (PLEG): container finished" podID="d0877258-eaee-4344-ba3e-1006ffbd350c" containerID="19ed4b5bb9aad46c56d5c1a38690c6c3d20f1309dec4154cd523645971a3c99a" exitCode=0 Jan 22 16:09:12 crc kubenswrapper[4825]: I0122 16:09:12.294783 4825 scope.go:117] "RemoveContainer" containerID="026cc17cbf80bd9c769a6601d20723034d43e91c3b10d035cecec41cd2979445" Jan 22 16:09:12 crc kubenswrapper[4825]: I0122 16:09:12.317259 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jjktq"] Jan 22 16:09:12 crc kubenswrapper[4825]: I0122 16:09:12.330090 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jjktq"] Jan 22 16:09:12 crc kubenswrapper[4825]: I0122 16:09:12.335366 4825 scope.go:117] "RemoveContainer" containerID="2dcdcc821f68a530550bcb0f8cbc8c44c632da717e781ca17af3bd043a17c8ee" Jan 22 16:09:12 crc kubenswrapper[4825]: E0122 16:09:12.427845 4825 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod697daa05_e987_4bf2_a924_df734a327432.slice\": RecentStats: unable to find data in memory cache]" Jan 22 16:09:13 crc kubenswrapper[4825]: I0122 16:09:13.267300 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-xclpn" event={"ID":"d0877258-eaee-4344-ba3e-1006ffbd350c","Type":"ContainerStarted","Data":"788a478e8b7588608a386c7c6f19345b87eacdf716278a11c43d95000dafa4e3"} Jan 22 16:09:13 crc kubenswrapper[4825]: I0122 16:09:13.290376 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xclpn" podStartSLOduration=2.763276936 podStartE2EDuration="9.290359159s" podCreationTimestamp="2026-01-22 16:09:04 +0000 UTC" firstStartedPulling="2026-01-22 16:09:06.165314151 +0000 UTC m=+2692.926841061" lastFinishedPulling="2026-01-22 16:09:12.692396364 +0000 UTC m=+2699.453923284" observedRunningTime="2026-01-22 16:09:13.285307045 +0000 UTC m=+2700.046833955" watchObservedRunningTime="2026-01-22 16:09:13.290359159 +0000 UTC m=+2700.051886069" Jan 22 16:09:13 crc kubenswrapper[4825]: I0122 16:09:13.552281 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="697daa05-e987-4bf2-a924-df734a327432" path="/var/lib/kubelet/pods/697daa05-e987-4bf2-a924-df734a327432/volumes" Jan 22 16:09:15 crc kubenswrapper[4825]: I0122 16:09:15.290342 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xclpn" Jan 22 16:09:15 crc kubenswrapper[4825]: I0122 16:09:15.290596 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xclpn" Jan 22 16:09:16 crc kubenswrapper[4825]: I0122 16:09:16.339446 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xclpn" podUID="d0877258-eaee-4344-ba3e-1006ffbd350c" containerName="registry-server" probeResult="failure" output=< Jan 22 16:09:16 crc kubenswrapper[4825]: timeout: failed to connect service ":50051" within 1s Jan 22 16:09:16 crc kubenswrapper[4825]: > Jan 22 16:09:25 crc kubenswrapper[4825]: I0122 16:09:25.339318 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-xclpn" Jan 22 16:09:25 crc kubenswrapper[4825]: I0122 16:09:25.396255 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xclpn" Jan 22 16:09:25 crc kubenswrapper[4825]: I0122 16:09:25.582279 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xclpn"] Jan 22 16:09:26 crc kubenswrapper[4825]: I0122 16:09:26.411044 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xclpn" podUID="d0877258-eaee-4344-ba3e-1006ffbd350c" containerName="registry-server" containerID="cri-o://788a478e8b7588608a386c7c6f19345b87eacdf716278a11c43d95000dafa4e3" gracePeriod=2 Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.331864 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xclpn" Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.429077 4825 generic.go:334] "Generic (PLEG): container finished" podID="d0877258-eaee-4344-ba3e-1006ffbd350c" containerID="788a478e8b7588608a386c7c6f19345b87eacdf716278a11c43d95000dafa4e3" exitCode=0 Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.429118 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xclpn" event={"ID":"d0877258-eaee-4344-ba3e-1006ffbd350c","Type":"ContainerDied","Data":"788a478e8b7588608a386c7c6f19345b87eacdf716278a11c43d95000dafa4e3"} Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.429147 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xclpn" event={"ID":"d0877258-eaee-4344-ba3e-1006ffbd350c","Type":"ContainerDied","Data":"54861d8244e8e5f3533be8f0339d3c2c7fe7f4ca89e1a2e31d60a70cc0dbc81a"} Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.429165 4825 scope.go:117] "RemoveContainer" 
containerID="788a478e8b7588608a386c7c6f19345b87eacdf716278a11c43d95000dafa4e3" Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.429307 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xclpn" Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.463709 4825 scope.go:117] "RemoveContainer" containerID="19ed4b5bb9aad46c56d5c1a38690c6c3d20f1309dec4154cd523645971a3c99a" Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.488031 4825 scope.go:117] "RemoveContainer" containerID="1fa594d5019f90d85c9497024041ead482ea722ffe179198717dee7d077f3571" Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.508658 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0877258-eaee-4344-ba3e-1006ffbd350c-catalog-content\") pod \"d0877258-eaee-4344-ba3e-1006ffbd350c\" (UID: \"d0877258-eaee-4344-ba3e-1006ffbd350c\") " Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.508888 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0877258-eaee-4344-ba3e-1006ffbd350c-utilities\") pod \"d0877258-eaee-4344-ba3e-1006ffbd350c\" (UID: \"d0877258-eaee-4344-ba3e-1006ffbd350c\") " Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.509008 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8twsf\" (UniqueName: \"kubernetes.io/projected/d0877258-eaee-4344-ba3e-1006ffbd350c-kube-api-access-8twsf\") pod \"d0877258-eaee-4344-ba3e-1006ffbd350c\" (UID: \"d0877258-eaee-4344-ba3e-1006ffbd350c\") " Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.509763 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0877258-eaee-4344-ba3e-1006ffbd350c-utilities" (OuterVolumeSpecName: "utilities") pod "d0877258-eaee-4344-ba3e-1006ffbd350c" (UID: 
"d0877258-eaee-4344-ba3e-1006ffbd350c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.514810 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0877258-eaee-4344-ba3e-1006ffbd350c-kube-api-access-8twsf" (OuterVolumeSpecName: "kube-api-access-8twsf") pod "d0877258-eaee-4344-ba3e-1006ffbd350c" (UID: "d0877258-eaee-4344-ba3e-1006ffbd350c"). InnerVolumeSpecName "kube-api-access-8twsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.580390 4825 scope.go:117] "RemoveContainer" containerID="788a478e8b7588608a386c7c6f19345b87eacdf716278a11c43d95000dafa4e3" Jan 22 16:09:27 crc kubenswrapper[4825]: E0122 16:09:27.580857 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"788a478e8b7588608a386c7c6f19345b87eacdf716278a11c43d95000dafa4e3\": container with ID starting with 788a478e8b7588608a386c7c6f19345b87eacdf716278a11c43d95000dafa4e3 not found: ID does not exist" containerID="788a478e8b7588608a386c7c6f19345b87eacdf716278a11c43d95000dafa4e3" Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.580898 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"788a478e8b7588608a386c7c6f19345b87eacdf716278a11c43d95000dafa4e3"} err="failed to get container status \"788a478e8b7588608a386c7c6f19345b87eacdf716278a11c43d95000dafa4e3\": rpc error: code = NotFound desc = could not find container \"788a478e8b7588608a386c7c6f19345b87eacdf716278a11c43d95000dafa4e3\": container with ID starting with 788a478e8b7588608a386c7c6f19345b87eacdf716278a11c43d95000dafa4e3 not found: ID does not exist" Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.580925 4825 scope.go:117] "RemoveContainer" 
containerID="19ed4b5bb9aad46c56d5c1a38690c6c3d20f1309dec4154cd523645971a3c99a" Jan 22 16:09:27 crc kubenswrapper[4825]: E0122 16:09:27.581280 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19ed4b5bb9aad46c56d5c1a38690c6c3d20f1309dec4154cd523645971a3c99a\": container with ID starting with 19ed4b5bb9aad46c56d5c1a38690c6c3d20f1309dec4154cd523645971a3c99a not found: ID does not exist" containerID="19ed4b5bb9aad46c56d5c1a38690c6c3d20f1309dec4154cd523645971a3c99a" Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.581312 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19ed4b5bb9aad46c56d5c1a38690c6c3d20f1309dec4154cd523645971a3c99a"} err="failed to get container status \"19ed4b5bb9aad46c56d5c1a38690c6c3d20f1309dec4154cd523645971a3c99a\": rpc error: code = NotFound desc = could not find container \"19ed4b5bb9aad46c56d5c1a38690c6c3d20f1309dec4154cd523645971a3c99a\": container with ID starting with 19ed4b5bb9aad46c56d5c1a38690c6c3d20f1309dec4154cd523645971a3c99a not found: ID does not exist" Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.581333 4825 scope.go:117] "RemoveContainer" containerID="1fa594d5019f90d85c9497024041ead482ea722ffe179198717dee7d077f3571" Jan 22 16:09:27 crc kubenswrapper[4825]: E0122 16:09:27.581570 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fa594d5019f90d85c9497024041ead482ea722ffe179198717dee7d077f3571\": container with ID starting with 1fa594d5019f90d85c9497024041ead482ea722ffe179198717dee7d077f3571 not found: ID does not exist" containerID="1fa594d5019f90d85c9497024041ead482ea722ffe179198717dee7d077f3571" Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.581599 4825 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1fa594d5019f90d85c9497024041ead482ea722ffe179198717dee7d077f3571"} err="failed to get container status \"1fa594d5019f90d85c9497024041ead482ea722ffe179198717dee7d077f3571\": rpc error: code = NotFound desc = could not find container \"1fa594d5019f90d85c9497024041ead482ea722ffe179198717dee7d077f3571\": container with ID starting with 1fa594d5019f90d85c9497024041ead482ea722ffe179198717dee7d077f3571 not found: ID does not exist" Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.612037 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0877258-eaee-4344-ba3e-1006ffbd350c-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.612070 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8twsf\" (UniqueName: \"kubernetes.io/projected/d0877258-eaee-4344-ba3e-1006ffbd350c-kube-api-access-8twsf\") on node \"crc\" DevicePath \"\"" Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.676670 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0877258-eaee-4344-ba3e-1006ffbd350c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0877258-eaee-4344-ba3e-1006ffbd350c" (UID: "d0877258-eaee-4344-ba3e-1006ffbd350c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.714074 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0877258-eaee-4344-ba3e-1006ffbd350c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.770387 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xclpn"] Jan 22 16:09:27 crc kubenswrapper[4825]: I0122 16:09:27.780618 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xclpn"] Jan 22 16:09:29 crc kubenswrapper[4825]: I0122 16:09:29.544324 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0877258-eaee-4344-ba3e-1006ffbd350c" path="/var/lib/kubelet/pods/d0877258-eaee-4344-ba3e-1006ffbd350c/volumes" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.629746 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n5r57"] Jan 22 16:10:29 crc kubenswrapper[4825]: E0122 16:10:29.630859 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="697daa05-e987-4bf2-a924-df734a327432" containerName="registry-server" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.630875 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="697daa05-e987-4bf2-a924-df734a327432" containerName="registry-server" Jan 22 16:10:29 crc kubenswrapper[4825]: E0122 16:10:29.630894 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="697daa05-e987-4bf2-a924-df734a327432" containerName="extract-content" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.630902 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="697daa05-e987-4bf2-a924-df734a327432" containerName="extract-content" Jan 22 16:10:29 crc kubenswrapper[4825]: E0122 16:10:29.630920 4825 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="697daa05-e987-4bf2-a924-df734a327432" containerName="extract-utilities" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.630929 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="697daa05-e987-4bf2-a924-df734a327432" containerName="extract-utilities" Jan 22 16:10:29 crc kubenswrapper[4825]: E0122 16:10:29.630955 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0877258-eaee-4344-ba3e-1006ffbd350c" containerName="extract-utilities" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.630962 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0877258-eaee-4344-ba3e-1006ffbd350c" containerName="extract-utilities" Jan 22 16:10:29 crc kubenswrapper[4825]: E0122 16:10:29.631007 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0877258-eaee-4344-ba3e-1006ffbd350c" containerName="registry-server" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.631015 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0877258-eaee-4344-ba3e-1006ffbd350c" containerName="registry-server" Jan 22 16:10:29 crc kubenswrapper[4825]: E0122 16:10:29.631047 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0877258-eaee-4344-ba3e-1006ffbd350c" containerName="extract-content" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.631056 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0877258-eaee-4344-ba3e-1006ffbd350c" containerName="extract-content" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.631320 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0877258-eaee-4344-ba3e-1006ffbd350c" containerName="registry-server" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.631351 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="697daa05-e987-4bf2-a924-df734a327432" containerName="registry-server" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.633320 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n5r57" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.645712 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n5r57"] Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.746292 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cacef601-a4a8-4660-9747-2c68a1783fd2-catalog-content\") pod \"community-operators-n5r57\" (UID: \"cacef601-a4a8-4660-9747-2c68a1783fd2\") " pod="openshift-marketplace/community-operators-n5r57" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.746846 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cacef601-a4a8-4660-9747-2c68a1783fd2-utilities\") pod \"community-operators-n5r57\" (UID: \"cacef601-a4a8-4660-9747-2c68a1783fd2\") " pod="openshift-marketplace/community-operators-n5r57" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.746872 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqm5c\" (UniqueName: \"kubernetes.io/projected/cacef601-a4a8-4660-9747-2c68a1783fd2-kube-api-access-dqm5c\") pod \"community-operators-n5r57\" (UID: \"cacef601-a4a8-4660-9747-2c68a1783fd2\") " pod="openshift-marketplace/community-operators-n5r57" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.849261 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cacef601-a4a8-4660-9747-2c68a1783fd2-catalog-content\") pod \"community-operators-n5r57\" (UID: \"cacef601-a4a8-4660-9747-2c68a1783fd2\") " pod="openshift-marketplace/community-operators-n5r57" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.849457 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cacef601-a4a8-4660-9747-2c68a1783fd2-utilities\") pod \"community-operators-n5r57\" (UID: \"cacef601-a4a8-4660-9747-2c68a1783fd2\") " pod="openshift-marketplace/community-operators-n5r57" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.849477 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqm5c\" (UniqueName: \"kubernetes.io/projected/cacef601-a4a8-4660-9747-2c68a1783fd2-kube-api-access-dqm5c\") pod \"community-operators-n5r57\" (UID: \"cacef601-a4a8-4660-9747-2c68a1783fd2\") " pod="openshift-marketplace/community-operators-n5r57" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.849945 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cacef601-a4a8-4660-9747-2c68a1783fd2-catalog-content\") pod \"community-operators-n5r57\" (UID: \"cacef601-a4a8-4660-9747-2c68a1783fd2\") " pod="openshift-marketplace/community-operators-n5r57" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.850025 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cacef601-a4a8-4660-9747-2c68a1783fd2-utilities\") pod \"community-operators-n5r57\" (UID: \"cacef601-a4a8-4660-9747-2c68a1783fd2\") " pod="openshift-marketplace/community-operators-n5r57" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.872236 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqm5c\" (UniqueName: \"kubernetes.io/projected/cacef601-a4a8-4660-9747-2c68a1783fd2-kube-api-access-dqm5c\") pod \"community-operators-n5r57\" (UID: \"cacef601-a4a8-4660-9747-2c68a1783fd2\") " pod="openshift-marketplace/community-operators-n5r57" Jan 22 16:10:29 crc kubenswrapper[4825]: I0122 16:10:29.951773 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n5r57" Jan 22 16:10:30 crc kubenswrapper[4825]: I0122 16:10:30.496532 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n5r57"] Jan 22 16:10:30 crc kubenswrapper[4825]: I0122 16:10:30.577229 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5r57" event={"ID":"cacef601-a4a8-4660-9747-2c68a1783fd2","Type":"ContainerStarted","Data":"85c7f111079fec53068cd7806065815f6eb0ece0893c6b6dbef19b9b948b8343"} Jan 22 16:10:31 crc kubenswrapper[4825]: I0122 16:10:31.589442 4825 generic.go:334] "Generic (PLEG): container finished" podID="cacef601-a4a8-4660-9747-2c68a1783fd2" containerID="a7432a1768e30ce31415c8c6ebdb1f5e6debc140e1bf9f214de211262346c09b" exitCode=0 Jan 22 16:10:31 crc kubenswrapper[4825]: I0122 16:10:31.589563 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5r57" event={"ID":"cacef601-a4a8-4660-9747-2c68a1783fd2","Type":"ContainerDied","Data":"a7432a1768e30ce31415c8c6ebdb1f5e6debc140e1bf9f214de211262346c09b"} Jan 22 16:10:32 crc kubenswrapper[4825]: I0122 16:10:32.612883 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5r57" event={"ID":"cacef601-a4a8-4660-9747-2c68a1783fd2","Type":"ContainerStarted","Data":"d67290aae77e7c56698175673ccb34a34d7e60a0b6c482d047f9f3ead2b79b38"} Jan 22 16:10:33 crc kubenswrapper[4825]: I0122 16:10:33.624020 4825 generic.go:334] "Generic (PLEG): container finished" podID="cacef601-a4a8-4660-9747-2c68a1783fd2" containerID="d67290aae77e7c56698175673ccb34a34d7e60a0b6c482d047f9f3ead2b79b38" exitCode=0 Jan 22 16:10:33 crc kubenswrapper[4825]: I0122 16:10:33.624082 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5r57" 
event={"ID":"cacef601-a4a8-4660-9747-2c68a1783fd2","Type":"ContainerDied","Data":"d67290aae77e7c56698175673ccb34a34d7e60a0b6c482d047f9f3ead2b79b38"} Jan 22 16:10:34 crc kubenswrapper[4825]: I0122 16:10:34.641843 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5r57" event={"ID":"cacef601-a4a8-4660-9747-2c68a1783fd2","Type":"ContainerStarted","Data":"01db75cff4db257f89126aac1e9bd9784ccfb393f26dc7117177abc29bb3bdc4"} Jan 22 16:10:34 crc kubenswrapper[4825]: I0122 16:10:34.680137 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n5r57" podStartSLOduration=3.247470499 podStartE2EDuration="5.680114282s" podCreationTimestamp="2026-01-22 16:10:29 +0000 UTC" firstStartedPulling="2026-01-22 16:10:31.59332415 +0000 UTC m=+2778.354851070" lastFinishedPulling="2026-01-22 16:10:34.025967943 +0000 UTC m=+2780.787494853" observedRunningTime="2026-01-22 16:10:34.666230095 +0000 UTC m=+2781.427757005" watchObservedRunningTime="2026-01-22 16:10:34.680114282 +0000 UTC m=+2781.441641212" Jan 22 16:10:39 crc kubenswrapper[4825]: I0122 16:10:39.961111 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n5r57" Jan 22 16:10:39 crc kubenswrapper[4825]: I0122 16:10:39.962941 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n5r57" Jan 22 16:10:40 crc kubenswrapper[4825]: I0122 16:10:40.009589 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n5r57" Jan 22 16:10:40 crc kubenswrapper[4825]: I0122 16:10:40.753734 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n5r57" Jan 22 16:10:40 crc kubenswrapper[4825]: I0122 16:10:40.848880 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-n5r57"] Jan 22 16:10:42 crc kubenswrapper[4825]: I0122 16:10:42.727683 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n5r57" podUID="cacef601-a4a8-4660-9747-2c68a1783fd2" containerName="registry-server" containerID="cri-o://01db75cff4db257f89126aac1e9bd9784ccfb393f26dc7117177abc29bb3bdc4" gracePeriod=2 Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.726248 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n5r57" Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.739827 4825 generic.go:334] "Generic (PLEG): container finished" podID="cacef601-a4a8-4660-9747-2c68a1783fd2" containerID="01db75cff4db257f89126aac1e9bd9784ccfb393f26dc7117177abc29bb3bdc4" exitCode=0 Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.739877 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5r57" event={"ID":"cacef601-a4a8-4660-9747-2c68a1783fd2","Type":"ContainerDied","Data":"01db75cff4db257f89126aac1e9bd9784ccfb393f26dc7117177abc29bb3bdc4"} Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.739909 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5r57" event={"ID":"cacef601-a4a8-4660-9747-2c68a1783fd2","Type":"ContainerDied","Data":"85c7f111079fec53068cd7806065815f6eb0ece0893c6b6dbef19b9b948b8343"} Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.739929 4825 scope.go:117] "RemoveContainer" containerID="01db75cff4db257f89126aac1e9bd9784ccfb393f26dc7117177abc29bb3bdc4" Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.740113 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n5r57" Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.766595 4825 scope.go:117] "RemoveContainer" containerID="d67290aae77e7c56698175673ccb34a34d7e60a0b6c482d047f9f3ead2b79b38" Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.788295 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cacef601-a4a8-4660-9747-2c68a1783fd2-catalog-content\") pod \"cacef601-a4a8-4660-9747-2c68a1783fd2\" (UID: \"cacef601-a4a8-4660-9747-2c68a1783fd2\") " Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.788355 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cacef601-a4a8-4660-9747-2c68a1783fd2-utilities\") pod \"cacef601-a4a8-4660-9747-2c68a1783fd2\" (UID: \"cacef601-a4a8-4660-9747-2c68a1783fd2\") " Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.788486 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqm5c\" (UniqueName: \"kubernetes.io/projected/cacef601-a4a8-4660-9747-2c68a1783fd2-kube-api-access-dqm5c\") pod \"cacef601-a4a8-4660-9747-2c68a1783fd2\" (UID: \"cacef601-a4a8-4660-9747-2c68a1783fd2\") " Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.789367 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cacef601-a4a8-4660-9747-2c68a1783fd2-utilities" (OuterVolumeSpecName: "utilities") pod "cacef601-a4a8-4660-9747-2c68a1783fd2" (UID: "cacef601-a4a8-4660-9747-2c68a1783fd2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.789864 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cacef601-a4a8-4660-9747-2c68a1783fd2-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.795941 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cacef601-a4a8-4660-9747-2c68a1783fd2-kube-api-access-dqm5c" (OuterVolumeSpecName: "kube-api-access-dqm5c") pod "cacef601-a4a8-4660-9747-2c68a1783fd2" (UID: "cacef601-a4a8-4660-9747-2c68a1783fd2"). InnerVolumeSpecName "kube-api-access-dqm5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.801621 4825 scope.go:117] "RemoveContainer" containerID="a7432a1768e30ce31415c8c6ebdb1f5e6debc140e1bf9f214de211262346c09b" Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.845555 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cacef601-a4a8-4660-9747-2c68a1783fd2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cacef601-a4a8-4660-9747-2c68a1783fd2" (UID: "cacef601-a4a8-4660-9747-2c68a1783fd2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.892189 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqm5c\" (UniqueName: \"kubernetes.io/projected/cacef601-a4a8-4660-9747-2c68a1783fd2-kube-api-access-dqm5c\") on node \"crc\" DevicePath \"\"" Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.892240 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cacef601-a4a8-4660-9747-2c68a1783fd2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.896116 4825 scope.go:117] "RemoveContainer" containerID="01db75cff4db257f89126aac1e9bd9784ccfb393f26dc7117177abc29bb3bdc4" Jan 22 16:10:43 crc kubenswrapper[4825]: E0122 16:10:43.896831 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01db75cff4db257f89126aac1e9bd9784ccfb393f26dc7117177abc29bb3bdc4\": container with ID starting with 01db75cff4db257f89126aac1e9bd9784ccfb393f26dc7117177abc29bb3bdc4 not found: ID does not exist" containerID="01db75cff4db257f89126aac1e9bd9784ccfb393f26dc7117177abc29bb3bdc4" Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.896899 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01db75cff4db257f89126aac1e9bd9784ccfb393f26dc7117177abc29bb3bdc4"} err="failed to get container status \"01db75cff4db257f89126aac1e9bd9784ccfb393f26dc7117177abc29bb3bdc4\": rpc error: code = NotFound desc = could not find container \"01db75cff4db257f89126aac1e9bd9784ccfb393f26dc7117177abc29bb3bdc4\": container with ID starting with 01db75cff4db257f89126aac1e9bd9784ccfb393f26dc7117177abc29bb3bdc4 not found: ID does not exist" Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.896949 4825 scope.go:117] "RemoveContainer" 
containerID="d67290aae77e7c56698175673ccb34a34d7e60a0b6c482d047f9f3ead2b79b38" Jan 22 16:10:43 crc kubenswrapper[4825]: E0122 16:10:43.897281 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d67290aae77e7c56698175673ccb34a34d7e60a0b6c482d047f9f3ead2b79b38\": container with ID starting with d67290aae77e7c56698175673ccb34a34d7e60a0b6c482d047f9f3ead2b79b38 not found: ID does not exist" containerID="d67290aae77e7c56698175673ccb34a34d7e60a0b6c482d047f9f3ead2b79b38" Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.897319 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d67290aae77e7c56698175673ccb34a34d7e60a0b6c482d047f9f3ead2b79b38"} err="failed to get container status \"d67290aae77e7c56698175673ccb34a34d7e60a0b6c482d047f9f3ead2b79b38\": rpc error: code = NotFound desc = could not find container \"d67290aae77e7c56698175673ccb34a34d7e60a0b6c482d047f9f3ead2b79b38\": container with ID starting with d67290aae77e7c56698175673ccb34a34d7e60a0b6c482d047f9f3ead2b79b38 not found: ID does not exist" Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.897337 4825 scope.go:117] "RemoveContainer" containerID="a7432a1768e30ce31415c8c6ebdb1f5e6debc140e1bf9f214de211262346c09b" Jan 22 16:10:43 crc kubenswrapper[4825]: E0122 16:10:43.897730 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7432a1768e30ce31415c8c6ebdb1f5e6debc140e1bf9f214de211262346c09b\": container with ID starting with a7432a1768e30ce31415c8c6ebdb1f5e6debc140e1bf9f214de211262346c09b not found: ID does not exist" containerID="a7432a1768e30ce31415c8c6ebdb1f5e6debc140e1bf9f214de211262346c09b" Jan 22 16:10:43 crc kubenswrapper[4825]: I0122 16:10:43.897778 4825 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a7432a1768e30ce31415c8c6ebdb1f5e6debc140e1bf9f214de211262346c09b"} err="failed to get container status \"a7432a1768e30ce31415c8c6ebdb1f5e6debc140e1bf9f214de211262346c09b\": rpc error: code = NotFound desc = could not find container \"a7432a1768e30ce31415c8c6ebdb1f5e6debc140e1bf9f214de211262346c09b\": container with ID starting with a7432a1768e30ce31415c8c6ebdb1f5e6debc140e1bf9f214de211262346c09b not found: ID does not exist" Jan 22 16:10:44 crc kubenswrapper[4825]: I0122 16:10:44.077799 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n5r57"] Jan 22 16:10:44 crc kubenswrapper[4825]: I0122 16:10:44.089602 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n5r57"] Jan 22 16:10:45 crc kubenswrapper[4825]: I0122 16:10:45.529147 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cacef601-a4a8-4660-9747-2c68a1783fd2" path="/var/lib/kubelet/pods/cacef601-a4a8-4660-9747-2c68a1783fd2/volumes" Jan 22 16:10:56 crc kubenswrapper[4825]: I0122 16:10:56.894035 4825 generic.go:334] "Generic (PLEG): container finished" podID="e461766f-09e2-4b85-87e7-9e5048f701cd" containerID="f93e81bfeb286704e5a8c25955e5e6a297b4150ff55f70e5944707978327db28" exitCode=0 Jan 22 16:10:56 crc kubenswrapper[4825]: I0122 16:10:56.894129 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" event={"ID":"e461766f-09e2-4b85-87e7-9e5048f701cd","Type":"ContainerDied","Data":"f93e81bfeb286704e5a8c25955e5e6a297b4150ff55f70e5944707978327db28"} Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.575132 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.663907 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-ssh-key-openstack-edpm-ipam\") pod \"e461766f-09e2-4b85-87e7-9e5048f701cd\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.663957 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-combined-ca-bundle\") pod \"e461766f-09e2-4b85-87e7-9e5048f701cd\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.664002 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-migration-ssh-key-0\") pod \"e461766f-09e2-4b85-87e7-9e5048f701cd\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.664142 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n4wl\" (UniqueName: \"kubernetes.io/projected/e461766f-09e2-4b85-87e7-9e5048f701cd-kube-api-access-4n4wl\") pod \"e461766f-09e2-4b85-87e7-9e5048f701cd\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.664195 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-extra-config-0\") pod \"e461766f-09e2-4b85-87e7-9e5048f701cd\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.664241 
4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-cell1-compute-config-1\") pod \"e461766f-09e2-4b85-87e7-9e5048f701cd\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.664270 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-cell1-compute-config-0\") pod \"e461766f-09e2-4b85-87e7-9e5048f701cd\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.664297 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-inventory\") pod \"e461766f-09e2-4b85-87e7-9e5048f701cd\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.664350 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-migration-ssh-key-1\") pod \"e461766f-09e2-4b85-87e7-9e5048f701cd\" (UID: \"e461766f-09e2-4b85-87e7-9e5048f701cd\") " Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.670061 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e461766f-09e2-4b85-87e7-9e5048f701cd-kube-api-access-4n4wl" (OuterVolumeSpecName: "kube-api-access-4n4wl") pod "e461766f-09e2-4b85-87e7-9e5048f701cd" (UID: "e461766f-09e2-4b85-87e7-9e5048f701cd"). InnerVolumeSpecName "kube-api-access-4n4wl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.670736 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "e461766f-09e2-4b85-87e7-9e5048f701cd" (UID: "e461766f-09e2-4b85-87e7-9e5048f701cd"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.706686 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-inventory" (OuterVolumeSpecName: "inventory") pod "e461766f-09e2-4b85-87e7-9e5048f701cd" (UID: "e461766f-09e2-4b85-87e7-9e5048f701cd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.710170 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "e461766f-09e2-4b85-87e7-9e5048f701cd" (UID: "e461766f-09e2-4b85-87e7-9e5048f701cd"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.715549 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "e461766f-09e2-4b85-87e7-9e5048f701cd" (UID: "e461766f-09e2-4b85-87e7-9e5048f701cd"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.717348 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "e461766f-09e2-4b85-87e7-9e5048f701cd" (UID: "e461766f-09e2-4b85-87e7-9e5048f701cd"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.718310 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "e461766f-09e2-4b85-87e7-9e5048f701cd" (UID: "e461766f-09e2-4b85-87e7-9e5048f701cd"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.725105 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "e461766f-09e2-4b85-87e7-9e5048f701cd" (UID: "e461766f-09e2-4b85-87e7-9e5048f701cd"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.735446 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e461766f-09e2-4b85-87e7-9e5048f701cd" (UID: "e461766f-09e2-4b85-87e7-9e5048f701cd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.766282 4825 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.766322 4825 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.766334 4825 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.766347 4825 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.766357 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n4wl\" (UniqueName: \"kubernetes.io/projected/e461766f-09e2-4b85-87e7-9e5048f701cd-kube-api-access-4n4wl\") on node \"crc\" DevicePath \"\"" Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.766370 4825 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.766383 4825 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.766394 4825 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:10:58 crc kubenswrapper[4825]: I0122 16:10:58.766405 4825 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e461766f-09e2-4b85-87e7-9e5048f701cd-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.078514 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" event={"ID":"e461766f-09e2-4b85-87e7-9e5048f701cd","Type":"ContainerDied","Data":"b463d2b9b8e763a527c0c273e5511e3af11a5a33e556a73491ef7d842b7aefa8"} Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.078557 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b463d2b9b8e763a527c0c273e5511e3af11a5a33e556a73491ef7d842b7aefa8" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.078636 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nc2g4" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.174617 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh"] Jan 22 16:10:59 crc kubenswrapper[4825]: E0122 16:10:59.175489 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cacef601-a4a8-4660-9747-2c68a1783fd2" containerName="registry-server" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.175514 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="cacef601-a4a8-4660-9747-2c68a1783fd2" containerName="registry-server" Jan 22 16:10:59 crc kubenswrapper[4825]: E0122 16:10:59.175545 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cacef601-a4a8-4660-9747-2c68a1783fd2" containerName="extract-content" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.175554 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="cacef601-a4a8-4660-9747-2c68a1783fd2" containerName="extract-content" Jan 22 16:10:59 crc kubenswrapper[4825]: E0122 16:10:59.175582 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cacef601-a4a8-4660-9747-2c68a1783fd2" containerName="extract-utilities" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.175590 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="cacef601-a4a8-4660-9747-2c68a1783fd2" containerName="extract-utilities" Jan 22 16:10:59 crc kubenswrapper[4825]: E0122 16:10:59.175603 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e461766f-09e2-4b85-87e7-9e5048f701cd" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.175612 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="e461766f-09e2-4b85-87e7-9e5048f701cd" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.175821 4825 
memory_manager.go:354] "RemoveStaleState removing state" podUID="cacef601-a4a8-4660-9747-2c68a1783fd2" containerName="registry-server" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.175876 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="e461766f-09e2-4b85-87e7-9e5048f701cd" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.176816 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.180317 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.180607 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql4gv" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.180733 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.180895 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.181075 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.202691 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh"] Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.278402 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.278547 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.278588 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfl85\" (UniqueName: \"kubernetes.io/projected/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-kube-api-access-nfl85\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.278860 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.279011 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: 
\"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.279112 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.279150 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.380127 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.380194 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.380255 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.380281 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfl85\" (UniqueName: \"kubernetes.io/projected/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-kube-api-access-nfl85\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.380395 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.380427 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.380556 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.392915 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.498089 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfl85\" (UniqueName: \"kubernetes.io/projected/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-kube-api-access-nfl85\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.537674 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.555256 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: 
\"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.556308 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.556888 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.559263 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:10:59 crc kubenswrapper[4825]: I0122 16:10:59.807086 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:11:00 crc kubenswrapper[4825]: I0122 16:11:00.391761 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh"] Jan 22 16:11:01 crc kubenswrapper[4825]: I0122 16:11:01.110125 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" event={"ID":"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe","Type":"ContainerStarted","Data":"67839d965b0599cabc8c0e779cde15ad18ee91e72e4e2f0a158450736becc9e4"} Jan 22 16:11:02 crc kubenswrapper[4825]: I0122 16:11:02.124358 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" event={"ID":"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe","Type":"ContainerStarted","Data":"ecb49f9b3b5d3568c64b2632ca9d0c624585adb95bdfdc6b83f5011a14ff35d4"} Jan 22 16:11:02 crc kubenswrapper[4825]: I0122 16:11:02.143947 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" podStartSLOduration=2.592693032 podStartE2EDuration="3.143930777s" podCreationTimestamp="2026-01-22 16:10:59 +0000 UTC" firstStartedPulling="2026-01-22 16:11:00.386418411 +0000 UTC m=+2807.147945331" lastFinishedPulling="2026-01-22 16:11:00.937656176 +0000 UTC m=+2807.699183076" observedRunningTime="2026-01-22 16:11:02.140744096 +0000 UTC m=+2808.902271006" watchObservedRunningTime="2026-01-22 16:11:02.143930777 +0000 UTC m=+2808.905457687" Jan 22 16:11:05 crc kubenswrapper[4825]: I0122 16:11:05.542297 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:11:05 crc kubenswrapper[4825]: 
I0122 16:11:05.542837 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:11:35 crc kubenswrapper[4825]: I0122 16:11:35.542125 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:11:35 crc kubenswrapper[4825]: I0122 16:11:35.542806 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:11:59 crc kubenswrapper[4825]: I0122 16:11:59.790860 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pg6sb"] Jan 22 16:11:59 crc kubenswrapper[4825]: I0122 16:11:59.793720 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg6sb" Jan 22 16:11:59 crc kubenswrapper[4825]: I0122 16:11:59.804971 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg6sb"] Jan 22 16:11:59 crc kubenswrapper[4825]: I0122 16:11:59.897434 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq6rv\" (UniqueName: \"kubernetes.io/projected/97270eb3-fd1d-4f22-965d-6beb85f9a434-kube-api-access-mq6rv\") pod \"redhat-marketplace-pg6sb\" (UID: \"97270eb3-fd1d-4f22-965d-6beb85f9a434\") " pod="openshift-marketplace/redhat-marketplace-pg6sb" Jan 22 16:11:59 crc kubenswrapper[4825]: I0122 16:11:59.898313 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97270eb3-fd1d-4f22-965d-6beb85f9a434-utilities\") pod \"redhat-marketplace-pg6sb\" (UID: \"97270eb3-fd1d-4f22-965d-6beb85f9a434\") " pod="openshift-marketplace/redhat-marketplace-pg6sb" Jan 22 16:11:59 crc kubenswrapper[4825]: I0122 16:11:59.898425 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97270eb3-fd1d-4f22-965d-6beb85f9a434-catalog-content\") pod \"redhat-marketplace-pg6sb\" (UID: \"97270eb3-fd1d-4f22-965d-6beb85f9a434\") " pod="openshift-marketplace/redhat-marketplace-pg6sb" Jan 22 16:12:00 crc kubenswrapper[4825]: I0122 16:12:00.000735 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97270eb3-fd1d-4f22-965d-6beb85f9a434-catalog-content\") pod \"redhat-marketplace-pg6sb\" (UID: \"97270eb3-fd1d-4f22-965d-6beb85f9a434\") " pod="openshift-marketplace/redhat-marketplace-pg6sb" Jan 22 16:12:00 crc kubenswrapper[4825]: I0122 16:12:00.001104 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mq6rv\" (UniqueName: \"kubernetes.io/projected/97270eb3-fd1d-4f22-965d-6beb85f9a434-kube-api-access-mq6rv\") pod \"redhat-marketplace-pg6sb\" (UID: \"97270eb3-fd1d-4f22-965d-6beb85f9a434\") " pod="openshift-marketplace/redhat-marketplace-pg6sb" Jan 22 16:12:00 crc kubenswrapper[4825]: I0122 16:12:00.001380 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97270eb3-fd1d-4f22-965d-6beb85f9a434-utilities\") pod \"redhat-marketplace-pg6sb\" (UID: \"97270eb3-fd1d-4f22-965d-6beb85f9a434\") " pod="openshift-marketplace/redhat-marketplace-pg6sb" Jan 22 16:12:00 crc kubenswrapper[4825]: I0122 16:12:00.001908 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97270eb3-fd1d-4f22-965d-6beb85f9a434-catalog-content\") pod \"redhat-marketplace-pg6sb\" (UID: \"97270eb3-fd1d-4f22-965d-6beb85f9a434\") " pod="openshift-marketplace/redhat-marketplace-pg6sb" Jan 22 16:12:00 crc kubenswrapper[4825]: I0122 16:12:00.005235 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97270eb3-fd1d-4f22-965d-6beb85f9a434-utilities\") pod \"redhat-marketplace-pg6sb\" (UID: \"97270eb3-fd1d-4f22-965d-6beb85f9a434\") " pod="openshift-marketplace/redhat-marketplace-pg6sb" Jan 22 16:12:00 crc kubenswrapper[4825]: I0122 16:12:00.028484 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq6rv\" (UniqueName: \"kubernetes.io/projected/97270eb3-fd1d-4f22-965d-6beb85f9a434-kube-api-access-mq6rv\") pod \"redhat-marketplace-pg6sb\" (UID: \"97270eb3-fd1d-4f22-965d-6beb85f9a434\") " pod="openshift-marketplace/redhat-marketplace-pg6sb" Jan 22 16:12:00 crc kubenswrapper[4825]: I0122 16:12:00.126916 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg6sb" Jan 22 16:12:00 crc kubenswrapper[4825]: I0122 16:12:00.865385 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg6sb"] Jan 22 16:12:00 crc kubenswrapper[4825]: W0122 16:12:00.879219 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97270eb3_fd1d_4f22_965d_6beb85f9a434.slice/crio-7e20c1b633b670b543e0baca294ad51881a25ad68293eb0003bf8d3ec94f2d49 WatchSource:0}: Error finding container 7e20c1b633b670b543e0baca294ad51881a25ad68293eb0003bf8d3ec94f2d49: Status 404 returned error can't find the container with id 7e20c1b633b670b543e0baca294ad51881a25ad68293eb0003bf8d3ec94f2d49 Jan 22 16:12:00 crc kubenswrapper[4825]: I0122 16:12:00.898957 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg6sb" event={"ID":"97270eb3-fd1d-4f22-965d-6beb85f9a434","Type":"ContainerStarted","Data":"7e20c1b633b670b543e0baca294ad51881a25ad68293eb0003bf8d3ec94f2d49"} Jan 22 16:12:01 crc kubenswrapper[4825]: I0122 16:12:01.909613 4825 generic.go:334] "Generic (PLEG): container finished" podID="97270eb3-fd1d-4f22-965d-6beb85f9a434" containerID="2c19f14dcc23c9156f911597e5145e3ce323ba7ab6a160c7342a7e85fd542d5f" exitCode=0 Jan 22 16:12:01 crc kubenswrapper[4825]: I0122 16:12:01.909664 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg6sb" event={"ID":"97270eb3-fd1d-4f22-965d-6beb85f9a434","Type":"ContainerDied","Data":"2c19f14dcc23c9156f911597e5145e3ce323ba7ab6a160c7342a7e85fd542d5f"} Jan 22 16:12:03 crc kubenswrapper[4825]: I0122 16:12:03.008646 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg6sb" 
event={"ID":"97270eb3-fd1d-4f22-965d-6beb85f9a434","Type":"ContainerStarted","Data":"ce81d634cc1839cfda8ac9ba56976ee8a7b4ed0431c0fa284ce6dc055cc8606c"} Jan 22 16:12:04 crc kubenswrapper[4825]: I0122 16:12:04.027833 4825 generic.go:334] "Generic (PLEG): container finished" podID="97270eb3-fd1d-4f22-965d-6beb85f9a434" containerID="ce81d634cc1839cfda8ac9ba56976ee8a7b4ed0431c0fa284ce6dc055cc8606c" exitCode=0 Jan 22 16:12:04 crc kubenswrapper[4825]: I0122 16:12:04.027904 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg6sb" event={"ID":"97270eb3-fd1d-4f22-965d-6beb85f9a434","Type":"ContainerDied","Data":"ce81d634cc1839cfda8ac9ba56976ee8a7b4ed0431c0fa284ce6dc055cc8606c"} Jan 22 16:12:05 crc kubenswrapper[4825]: I0122 16:12:05.040075 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg6sb" event={"ID":"97270eb3-fd1d-4f22-965d-6beb85f9a434","Type":"ContainerStarted","Data":"9a4c727f9e4e06bb9d11a28430d6048f3d6c30057b4b4bf798cd1427f9433cad"} Jan 22 16:12:05 crc kubenswrapper[4825]: I0122 16:12:05.075991 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pg6sb" podStartSLOduration=3.329929123 podStartE2EDuration="6.075951691s" podCreationTimestamp="2026-01-22 16:11:59 +0000 UTC" firstStartedPulling="2026-01-22 16:12:01.911154642 +0000 UTC m=+2868.672681552" lastFinishedPulling="2026-01-22 16:12:04.65717721 +0000 UTC m=+2871.418704120" observedRunningTime="2026-01-22 16:12:05.069184108 +0000 UTC m=+2871.830711028" watchObservedRunningTime="2026-01-22 16:12:05.075951691 +0000 UTC m=+2871.837478601" Jan 22 16:12:05 crc kubenswrapper[4825]: I0122 16:12:05.592688 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 22 16:12:05 crc kubenswrapper[4825]: I0122 16:12:05.592736 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:12:05 crc kubenswrapper[4825]: I0122 16:12:05.600479 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 16:12:05 crc kubenswrapper[4825]: I0122 16:12:05.601466 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c3066f9e0f387705d530f10b82fa5e48f74d5b1b2427dd55665164230f71184"} pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:12:05 crc kubenswrapper[4825]: I0122 16:12:05.601522 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" containerID="cri-o://2c3066f9e0f387705d530f10b82fa5e48f74d5b1b2427dd55665164230f71184" gracePeriod=600 Jan 22 16:12:06 crc kubenswrapper[4825]: I0122 16:12:06.066478 4825 generic.go:334] "Generic (PLEG): container finished" podID="1d6015ae-d193-4854-9861-dc4384510fdb" containerID="2c3066f9e0f387705d530f10b82fa5e48f74d5b1b2427dd55665164230f71184" exitCode=0 Jan 22 16:12:06 crc kubenswrapper[4825]: I0122 16:12:06.066525 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" 
event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerDied","Data":"2c3066f9e0f387705d530f10b82fa5e48f74d5b1b2427dd55665164230f71184"} Jan 22 16:12:06 crc kubenswrapper[4825]: I0122 16:12:06.067642 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerStarted","Data":"fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80"} Jan 22 16:12:06 crc kubenswrapper[4825]: I0122 16:12:06.067680 4825 scope.go:117] "RemoveContainer" containerID="0ff72a25401e2d83aa81f4c7afebc61e23c849e2db794f1b87ec17cd6d8c39ec" Jan 22 16:12:10 crc kubenswrapper[4825]: I0122 16:12:10.128334 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pg6sb" Jan 22 16:12:10 crc kubenswrapper[4825]: I0122 16:12:10.128695 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pg6sb" Jan 22 16:12:10 crc kubenswrapper[4825]: I0122 16:12:10.200663 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pg6sb" Jan 22 16:12:11 crc kubenswrapper[4825]: I0122 16:12:11.216728 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pg6sb" Jan 22 16:12:11 crc kubenswrapper[4825]: I0122 16:12:11.294316 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg6sb"] Jan 22 16:12:13 crc kubenswrapper[4825]: I0122 16:12:13.158342 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pg6sb" podUID="97270eb3-fd1d-4f22-965d-6beb85f9a434" containerName="registry-server" containerID="cri-o://9a4c727f9e4e06bb9d11a28430d6048f3d6c30057b4b4bf798cd1427f9433cad" gracePeriod=2 Jan 22 16:12:13 crc 
kubenswrapper[4825]: I0122 16:12:13.983684 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg6sb" Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.073174 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97270eb3-fd1d-4f22-965d-6beb85f9a434-utilities\") pod \"97270eb3-fd1d-4f22-965d-6beb85f9a434\" (UID: \"97270eb3-fd1d-4f22-965d-6beb85f9a434\") " Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.073411 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97270eb3-fd1d-4f22-965d-6beb85f9a434-catalog-content\") pod \"97270eb3-fd1d-4f22-965d-6beb85f9a434\" (UID: \"97270eb3-fd1d-4f22-965d-6beb85f9a434\") " Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.073507 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq6rv\" (UniqueName: \"kubernetes.io/projected/97270eb3-fd1d-4f22-965d-6beb85f9a434-kube-api-access-mq6rv\") pod \"97270eb3-fd1d-4f22-965d-6beb85f9a434\" (UID: \"97270eb3-fd1d-4f22-965d-6beb85f9a434\") " Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.081621 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97270eb3-fd1d-4f22-965d-6beb85f9a434-utilities" (OuterVolumeSpecName: "utilities") pod "97270eb3-fd1d-4f22-965d-6beb85f9a434" (UID: "97270eb3-fd1d-4f22-965d-6beb85f9a434"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.090338 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97270eb3-fd1d-4f22-965d-6beb85f9a434-kube-api-access-mq6rv" (OuterVolumeSpecName: "kube-api-access-mq6rv") pod "97270eb3-fd1d-4f22-965d-6beb85f9a434" (UID: "97270eb3-fd1d-4f22-965d-6beb85f9a434"). InnerVolumeSpecName "kube-api-access-mq6rv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.143250 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97270eb3-fd1d-4f22-965d-6beb85f9a434-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97270eb3-fd1d-4f22-965d-6beb85f9a434" (UID: "97270eb3-fd1d-4f22-965d-6beb85f9a434"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.178664 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97270eb3-fd1d-4f22-965d-6beb85f9a434-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.178696 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97270eb3-fd1d-4f22-965d-6beb85f9a434-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.178708 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq6rv\" (UniqueName: \"kubernetes.io/projected/97270eb3-fd1d-4f22-965d-6beb85f9a434-kube-api-access-mq6rv\") on node \"crc\" DevicePath \"\"" Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.230813 4825 generic.go:334] "Generic (PLEG): container finished" podID="97270eb3-fd1d-4f22-965d-6beb85f9a434" 
containerID="9a4c727f9e4e06bb9d11a28430d6048f3d6c30057b4b4bf798cd1427f9433cad" exitCode=0 Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.230864 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg6sb" event={"ID":"97270eb3-fd1d-4f22-965d-6beb85f9a434","Type":"ContainerDied","Data":"9a4c727f9e4e06bb9d11a28430d6048f3d6c30057b4b4bf798cd1427f9433cad"} Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.230898 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg6sb" event={"ID":"97270eb3-fd1d-4f22-965d-6beb85f9a434","Type":"ContainerDied","Data":"7e20c1b633b670b543e0baca294ad51881a25ad68293eb0003bf8d3ec94f2d49"} Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.230947 4825 scope.go:117] "RemoveContainer" containerID="9a4c727f9e4e06bb9d11a28430d6048f3d6c30057b4b4bf798cd1427f9433cad" Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.231152 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg6sb" Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.269715 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg6sb"] Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.270629 4825 scope.go:117] "RemoveContainer" containerID="ce81d634cc1839cfda8ac9ba56976ee8a7b4ed0431c0fa284ce6dc055cc8606c" Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.280062 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg6sb"] Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.291447 4825 scope.go:117] "RemoveContainer" containerID="2c19f14dcc23c9156f911597e5145e3ce323ba7ab6a160c7342a7e85fd542d5f" Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.348439 4825 scope.go:117] "RemoveContainer" containerID="9a4c727f9e4e06bb9d11a28430d6048f3d6c30057b4b4bf798cd1427f9433cad" Jan 22 16:12:14 crc kubenswrapper[4825]: E0122 16:12:14.348885 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a4c727f9e4e06bb9d11a28430d6048f3d6c30057b4b4bf798cd1427f9433cad\": container with ID starting with 9a4c727f9e4e06bb9d11a28430d6048f3d6c30057b4b4bf798cd1427f9433cad not found: ID does not exist" containerID="9a4c727f9e4e06bb9d11a28430d6048f3d6c30057b4b4bf798cd1427f9433cad" Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.348920 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a4c727f9e4e06bb9d11a28430d6048f3d6c30057b4b4bf798cd1427f9433cad"} err="failed to get container status \"9a4c727f9e4e06bb9d11a28430d6048f3d6c30057b4b4bf798cd1427f9433cad\": rpc error: code = NotFound desc = could not find container \"9a4c727f9e4e06bb9d11a28430d6048f3d6c30057b4b4bf798cd1427f9433cad\": container with ID starting with 9a4c727f9e4e06bb9d11a28430d6048f3d6c30057b4b4bf798cd1427f9433cad not found: 
ID does not exist" Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.348940 4825 scope.go:117] "RemoveContainer" containerID="ce81d634cc1839cfda8ac9ba56976ee8a7b4ed0431c0fa284ce6dc055cc8606c" Jan 22 16:12:14 crc kubenswrapper[4825]: E0122 16:12:14.349380 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce81d634cc1839cfda8ac9ba56976ee8a7b4ed0431c0fa284ce6dc055cc8606c\": container with ID starting with ce81d634cc1839cfda8ac9ba56976ee8a7b4ed0431c0fa284ce6dc055cc8606c not found: ID does not exist" containerID="ce81d634cc1839cfda8ac9ba56976ee8a7b4ed0431c0fa284ce6dc055cc8606c" Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.349413 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce81d634cc1839cfda8ac9ba56976ee8a7b4ed0431c0fa284ce6dc055cc8606c"} err="failed to get container status \"ce81d634cc1839cfda8ac9ba56976ee8a7b4ed0431c0fa284ce6dc055cc8606c\": rpc error: code = NotFound desc = could not find container \"ce81d634cc1839cfda8ac9ba56976ee8a7b4ed0431c0fa284ce6dc055cc8606c\": container with ID starting with ce81d634cc1839cfda8ac9ba56976ee8a7b4ed0431c0fa284ce6dc055cc8606c not found: ID does not exist" Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.349425 4825 scope.go:117] "RemoveContainer" containerID="2c19f14dcc23c9156f911597e5145e3ce323ba7ab6a160c7342a7e85fd542d5f" Jan 22 16:12:14 crc kubenswrapper[4825]: E0122 16:12:14.349628 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c19f14dcc23c9156f911597e5145e3ce323ba7ab6a160c7342a7e85fd542d5f\": container with ID starting with 2c19f14dcc23c9156f911597e5145e3ce323ba7ab6a160c7342a7e85fd542d5f not found: ID does not exist" containerID="2c19f14dcc23c9156f911597e5145e3ce323ba7ab6a160c7342a7e85fd542d5f" Jan 22 16:12:14 crc kubenswrapper[4825]: I0122 16:12:14.349651 4825 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c19f14dcc23c9156f911597e5145e3ce323ba7ab6a160c7342a7e85fd542d5f"} err="failed to get container status \"2c19f14dcc23c9156f911597e5145e3ce323ba7ab6a160c7342a7e85fd542d5f\": rpc error: code = NotFound desc = could not find container \"2c19f14dcc23c9156f911597e5145e3ce323ba7ab6a160c7342a7e85fd542d5f\": container with ID starting with 2c19f14dcc23c9156f911597e5145e3ce323ba7ab6a160c7342a7e85fd542d5f not found: ID does not exist" Jan 22 16:12:15 crc kubenswrapper[4825]: I0122 16:12:15.530356 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97270eb3-fd1d-4f22-965d-6beb85f9a434" path="/var/lib/kubelet/pods/97270eb3-fd1d-4f22-965d-6beb85f9a434/volumes" Jan 22 16:13:40 crc kubenswrapper[4825]: I0122 16:13:40.697742 4825 generic.go:334] "Generic (PLEG): container finished" podID="803d8c56-ded5-4e3c-bf48-d5eb0b623dfe" containerID="ecb49f9b3b5d3568c64b2632ca9d0c624585adb95bdfdc6b83f5011a14ff35d4" exitCode=0 Jan 22 16:13:40 crc kubenswrapper[4825]: I0122 16:13:40.697821 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" event={"ID":"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe","Type":"ContainerDied","Data":"ecb49f9b3b5d3568c64b2632ca9d0c624585adb95bdfdc6b83f5011a14ff35d4"} Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.223719 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.319724 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ssh-key-openstack-edpm-ipam\") pod \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.320156 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ceilometer-compute-config-data-1\") pod \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.320293 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-inventory\") pod \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.320476 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-telemetry-combined-ca-bundle\") pod \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.320686 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ceilometer-compute-config-data-0\") pod \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " Jan 22 16:13:42 crc 
kubenswrapper[4825]: I0122 16:13:42.320835 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfl85\" (UniqueName: \"kubernetes.io/projected/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-kube-api-access-nfl85\") pod \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.320939 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ceilometer-compute-config-data-2\") pod \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\" (UID: \"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe\") " Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.326314 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "803d8c56-ded5-4e3c-bf48-d5eb0b623dfe" (UID: "803d8c56-ded5-4e3c-bf48-d5eb0b623dfe"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.326962 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-kube-api-access-nfl85" (OuterVolumeSpecName: "kube-api-access-nfl85") pod "803d8c56-ded5-4e3c-bf48-d5eb0b623dfe" (UID: "803d8c56-ded5-4e3c-bf48-d5eb0b623dfe"). InnerVolumeSpecName "kube-api-access-nfl85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.352339 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "803d8c56-ded5-4e3c-bf48-d5eb0b623dfe" (UID: "803d8c56-ded5-4e3c-bf48-d5eb0b623dfe"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.356281 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-inventory" (OuterVolumeSpecName: "inventory") pod "803d8c56-ded5-4e3c-bf48-d5eb0b623dfe" (UID: "803d8c56-ded5-4e3c-bf48-d5eb0b623dfe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.358837 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "803d8c56-ded5-4e3c-bf48-d5eb0b623dfe" (UID: "803d8c56-ded5-4e3c-bf48-d5eb0b623dfe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.360150 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "803d8c56-ded5-4e3c-bf48-d5eb0b623dfe" (UID: "803d8c56-ded5-4e3c-bf48-d5eb0b623dfe"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.361415 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "803d8c56-ded5-4e3c-bf48-d5eb0b623dfe" (UID: "803d8c56-ded5-4e3c-bf48-d5eb0b623dfe"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.423819 4825 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.423860 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfl85\" (UniqueName: \"kubernetes.io/projected/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-kube-api-access-nfl85\") on node \"crc\" DevicePath \"\"" Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.423876 4825 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.423889 4825 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.423902 4825 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-ceilometer-compute-config-data-1\") on node 
\"crc\" DevicePath \"\"" Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.423918 4825 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.423932 4825 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803d8c56-ded5-4e3c-bf48-d5eb0b623dfe-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.720812 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" event={"ID":"803d8c56-ded5-4e3c-bf48-d5eb0b623dfe","Type":"ContainerDied","Data":"67839d965b0599cabc8c0e779cde15ad18ee91e72e4e2f0a158450736becc9e4"} Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.720850 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67839d965b0599cabc8c0e779cde15ad18ee91e72e4e2f0a158450736becc9e4" Jan 22 16:13:42 crc kubenswrapper[4825]: I0122 16:13:42.720953 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh" Jan 22 16:14:05 crc kubenswrapper[4825]: I0122 16:14:05.541900 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:14:05 crc kubenswrapper[4825]: I0122 16:14:05.542497 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:14:35 crc kubenswrapper[4825]: I0122 16:14:35.541469 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:14:35 crc kubenswrapper[4825]: I0122 16:14:35.542132 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.157029 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484975-f9bg6"] Jan 22 16:15:00 crc kubenswrapper[4825]: E0122 16:15:00.158356 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97270eb3-fd1d-4f22-965d-6beb85f9a434" containerName="registry-server" Jan 22 16:15:00 
crc kubenswrapper[4825]: I0122 16:15:00.158384 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="97270eb3-fd1d-4f22-965d-6beb85f9a434" containerName="registry-server"
Jan 22 16:15:00 crc kubenswrapper[4825]: E0122 16:15:00.158417 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97270eb3-fd1d-4f22-965d-6beb85f9a434" containerName="extract-content"
Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.158426 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="97270eb3-fd1d-4f22-965d-6beb85f9a434" containerName="extract-content"
Jan 22 16:15:00 crc kubenswrapper[4825]: E0122 16:15:00.158438 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97270eb3-fd1d-4f22-965d-6beb85f9a434" containerName="extract-utilities"
Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.158445 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="97270eb3-fd1d-4f22-965d-6beb85f9a434" containerName="extract-utilities"
Jan 22 16:15:00 crc kubenswrapper[4825]: E0122 16:15:00.158478 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="803d8c56-ded5-4e3c-bf48-d5eb0b623dfe" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.158488 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="803d8c56-ded5-4e3c-bf48-d5eb0b623dfe" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.158789 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="97270eb3-fd1d-4f22-965d-6beb85f9a434" containerName="registry-server"
Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.158829 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="803d8c56-ded5-4e3c-bf48-d5eb0b623dfe" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.159940 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484975-f9bg6"
Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.162312 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.162523 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.182031 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484975-f9bg6"]
Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.320025 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnrbl\" (UniqueName: \"kubernetes.io/projected/f7890406-bc83-4033-9d5c-13027e1791b7-kube-api-access-rnrbl\") pod \"collect-profiles-29484975-f9bg6\" (UID: \"f7890406-bc83-4033-9d5c-13027e1791b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484975-f9bg6"
Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.320099 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7890406-bc83-4033-9d5c-13027e1791b7-secret-volume\") pod \"collect-profiles-29484975-f9bg6\" (UID: \"f7890406-bc83-4033-9d5c-13027e1791b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484975-f9bg6"
Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.320703 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7890406-bc83-4033-9d5c-13027e1791b7-config-volume\") pod \"collect-profiles-29484975-f9bg6\" (UID: \"f7890406-bc83-4033-9d5c-13027e1791b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484975-f9bg6"
Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.422468 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7890406-bc83-4033-9d5c-13027e1791b7-config-volume\") pod \"collect-profiles-29484975-f9bg6\" (UID: \"f7890406-bc83-4033-9d5c-13027e1791b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484975-f9bg6"
Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.422626 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnrbl\" (UniqueName: \"kubernetes.io/projected/f7890406-bc83-4033-9d5c-13027e1791b7-kube-api-access-rnrbl\") pod \"collect-profiles-29484975-f9bg6\" (UID: \"f7890406-bc83-4033-9d5c-13027e1791b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484975-f9bg6"
Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.422659 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7890406-bc83-4033-9d5c-13027e1791b7-secret-volume\") pod \"collect-profiles-29484975-f9bg6\" (UID: \"f7890406-bc83-4033-9d5c-13027e1791b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484975-f9bg6"
Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.424500 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7890406-bc83-4033-9d5c-13027e1791b7-config-volume\") pod \"collect-profiles-29484975-f9bg6\" (UID: \"f7890406-bc83-4033-9d5c-13027e1791b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484975-f9bg6"
Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.429642 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7890406-bc83-4033-9d5c-13027e1791b7-secret-volume\") pod \"collect-profiles-29484975-f9bg6\" (UID: \"f7890406-bc83-4033-9d5c-13027e1791b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484975-f9bg6"
Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.442141 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnrbl\" (UniqueName: \"kubernetes.io/projected/f7890406-bc83-4033-9d5c-13027e1791b7-kube-api-access-rnrbl\") pod \"collect-profiles-29484975-f9bg6\" (UID: \"f7890406-bc83-4033-9d5c-13027e1791b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484975-f9bg6"
Jan 22 16:15:00 crc kubenswrapper[4825]: I0122 16:15:00.496378 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484975-f9bg6"
Jan 22 16:15:01 crc kubenswrapper[4825]: I0122 16:15:01.069780 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484975-f9bg6"]
Jan 22 16:15:02 crc kubenswrapper[4825]: I0122 16:15:02.045433 4825 generic.go:334] "Generic (PLEG): container finished" podID="f7890406-bc83-4033-9d5c-13027e1791b7" containerID="7bc12c69e44f56e66d062e5a24a69994bfe9d2deb53267022a06d36266a59eea" exitCode=0
Jan 22 16:15:02 crc kubenswrapper[4825]: I0122 16:15:02.045482 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484975-f9bg6" event={"ID":"f7890406-bc83-4033-9d5c-13027e1791b7","Type":"ContainerDied","Data":"7bc12c69e44f56e66d062e5a24a69994bfe9d2deb53267022a06d36266a59eea"}
Jan 22 16:15:02 crc kubenswrapper[4825]: I0122 16:15:02.045514 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484975-f9bg6" event={"ID":"f7890406-bc83-4033-9d5c-13027e1791b7","Type":"ContainerStarted","Data":"4b213274083c643dff86c9c6dccff21ceda06020929ee22678c6778f04185acd"}
Jan 22 16:15:03 crc kubenswrapper[4825]: I0122 16:15:03.549791 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484975-f9bg6"
Jan 22 16:15:03 crc kubenswrapper[4825]: I0122 16:15:03.601230 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnrbl\" (UniqueName: \"kubernetes.io/projected/f7890406-bc83-4033-9d5c-13027e1791b7-kube-api-access-rnrbl\") pod \"f7890406-bc83-4033-9d5c-13027e1791b7\" (UID: \"f7890406-bc83-4033-9d5c-13027e1791b7\") "
Jan 22 16:15:03 crc kubenswrapper[4825]: I0122 16:15:03.602691 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7890406-bc83-4033-9d5c-13027e1791b7-secret-volume\") pod \"f7890406-bc83-4033-9d5c-13027e1791b7\" (UID: \"f7890406-bc83-4033-9d5c-13027e1791b7\") "
Jan 22 16:15:03 crc kubenswrapper[4825]: I0122 16:15:03.608810 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7890406-bc83-4033-9d5c-13027e1791b7-kube-api-access-rnrbl" (OuterVolumeSpecName: "kube-api-access-rnrbl") pod "f7890406-bc83-4033-9d5c-13027e1791b7" (UID: "f7890406-bc83-4033-9d5c-13027e1791b7"). InnerVolumeSpecName "kube-api-access-rnrbl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:15:03 crc kubenswrapper[4825]: I0122 16:15:03.608821 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7890406-bc83-4033-9d5c-13027e1791b7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f7890406-bc83-4033-9d5c-13027e1791b7" (UID: "f7890406-bc83-4033-9d5c-13027e1791b7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:15:03 crc kubenswrapper[4825]: I0122 16:15:03.706153 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7890406-bc83-4033-9d5c-13027e1791b7-config-volume\") pod \"f7890406-bc83-4033-9d5c-13027e1791b7\" (UID: \"f7890406-bc83-4033-9d5c-13027e1791b7\") "
Jan 22 16:15:03 crc kubenswrapper[4825]: I0122 16:15:03.706680 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnrbl\" (UniqueName: \"kubernetes.io/projected/f7890406-bc83-4033-9d5c-13027e1791b7-kube-api-access-rnrbl\") on node \"crc\" DevicePath \"\""
Jan 22 16:15:03 crc kubenswrapper[4825]: I0122 16:15:03.706701 4825 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7890406-bc83-4033-9d5c-13027e1791b7-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 22 16:15:03 crc kubenswrapper[4825]: I0122 16:15:03.707373 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7890406-bc83-4033-9d5c-13027e1791b7-config-volume" (OuterVolumeSpecName: "config-volume") pod "f7890406-bc83-4033-9d5c-13027e1791b7" (UID: "f7890406-bc83-4033-9d5c-13027e1791b7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 16:15:03 crc kubenswrapper[4825]: I0122 16:15:03.810864 4825 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7890406-bc83-4033-9d5c-13027e1791b7-config-volume\") on node \"crc\" DevicePath \"\""
Jan 22 16:15:04 crc kubenswrapper[4825]: I0122 16:15:04.067682 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484975-f9bg6" event={"ID":"f7890406-bc83-4033-9d5c-13027e1791b7","Type":"ContainerDied","Data":"4b213274083c643dff86c9c6dccff21ceda06020929ee22678c6778f04185acd"}
Jan 22 16:15:04 crc kubenswrapper[4825]: I0122 16:15:04.067720 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b213274083c643dff86c9c6dccff21ceda06020929ee22678c6778f04185acd"
Jan 22 16:15:04 crc kubenswrapper[4825]: I0122 16:15:04.067749 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484975-f9bg6"
Jan 22 16:15:04 crc kubenswrapper[4825]: I0122 16:15:04.655060 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n"]
Jan 22 16:15:04 crc kubenswrapper[4825]: I0122 16:15:04.666011 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484930-xdl2n"]
Jan 22 16:15:05 crc kubenswrapper[4825]: I0122 16:15:05.542236 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 16:15:05 crc kubenswrapper[4825]: I0122 16:15:05.542363 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 16:15:05 crc kubenswrapper[4825]: I0122 16:15:05.550494 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da621d4b-84ff-4f2b-a8bf-db16fa054f4e" path="/var/lib/kubelet/pods/da621d4b-84ff-4f2b-a8bf-db16fa054f4e/volumes"
Jan 22 16:15:05 crc kubenswrapper[4825]: I0122 16:15:05.553144 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt"
Jan 22 16:15:05 crc kubenswrapper[4825]: I0122 16:15:05.555047 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80"} pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 16:15:05 crc kubenswrapper[4825]: I0122 16:15:05.555200 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" containerID="cri-o://fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80" gracePeriod=600
Jan 22 16:15:05 crc kubenswrapper[4825]: E0122 16:15:05.862538 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:15:06 crc kubenswrapper[4825]: I0122 16:15:06.097849 4825 generic.go:334] "Generic (PLEG): container finished" podID="1d6015ae-d193-4854-9861-dc4384510fdb" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80" exitCode=0
Jan 22 16:15:06 crc kubenswrapper[4825]: I0122 16:15:06.097947 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerDied","Data":"fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80"}
Jan 22 16:15:06 crc kubenswrapper[4825]: I0122 16:15:06.098038 4825 scope.go:117] "RemoveContainer" containerID="2c3066f9e0f387705d530f10b82fa5e48f74d5b1b2427dd55665164230f71184"
Jan 22 16:15:06 crc kubenswrapper[4825]: I0122 16:15:06.103907 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80"
Jan 22 16:15:06 crc kubenswrapper[4825]: E0122 16:15:06.107395 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:15:17 crc kubenswrapper[4825]: I0122 16:15:17.516771 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80"
Jan 22 16:15:17 crc kubenswrapper[4825]: E0122 16:15:17.517709 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.492354 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 22 16:15:27 crc kubenswrapper[4825]: E0122 16:15:27.493479 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7890406-bc83-4033-9d5c-13027e1791b7" containerName="collect-profiles"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.493500 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7890406-bc83-4033-9d5c-13027e1791b7" containerName="collect-profiles"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.493818 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7890406-bc83-4033-9d5c-13027e1791b7" containerName="collect-profiles"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.494849 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.498427 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.500597 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d022aa13-5f44-4fc9-8796-f86c575836ce-config-data\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.500742 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d022aa13-5f44-4fc9-8796-f86c575836ce-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.500814 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d022aa13-5f44-4fc9-8796-f86c575836ce-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.501393 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.501391 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.502304 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-tlqp5"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.510716 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.691022 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d022aa13-5f44-4fc9-8796-f86c575836ce-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.691370 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d022aa13-5f44-4fc9-8796-f86c575836ce-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.691412 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d022aa13-5f44-4fc9-8796-f86c575836ce-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.691429 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d022aa13-5f44-4fc9-8796-f86c575836ce-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.691464 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkwwk\" (UniqueName: \"kubernetes.io/projected/d022aa13-5f44-4fc9-8796-f86c575836ce-kube-api-access-vkwwk\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.691515 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d022aa13-5f44-4fc9-8796-f86c575836ce-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.691567 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.691609 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d022aa13-5f44-4fc9-8796-f86c575836ce-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.691689 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d022aa13-5f44-4fc9-8796-f86c575836ce-config-data\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.692751 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d022aa13-5f44-4fc9-8796-f86c575836ce-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.693911 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d022aa13-5f44-4fc9-8796-f86c575836ce-config-data\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.703631 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d022aa13-5f44-4fc9-8796-f86c575836ce-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.793703 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d022aa13-5f44-4fc9-8796-f86c575836ce-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.793808 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d022aa13-5f44-4fc9-8796-f86c575836ce-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.793859 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d022aa13-5f44-4fc9-8796-f86c575836ce-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.793891 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkwwk\" (UniqueName: \"kubernetes.io/projected/d022aa13-5f44-4fc9-8796-f86c575836ce-kube-api-access-vkwwk\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.794036 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.794076 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d022aa13-5f44-4fc9-8796-f86c575836ce-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.794488 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d022aa13-5f44-4fc9-8796-f86c575836ce-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.794720 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.795231 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d022aa13-5f44-4fc9-8796-f86c575836ce-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.797594 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d022aa13-5f44-4fc9-8796-f86c575836ce-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.798028 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d022aa13-5f44-4fc9-8796-f86c575836ce-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.811893 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkwwk\" (UniqueName: \"kubernetes.io/projected/d022aa13-5f44-4fc9-8796-f86c575836ce-kube-api-access-vkwwk\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:27 crc kubenswrapper[4825]: I0122 16:15:27.851104 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") " pod="openstack/tempest-tests-tempest"
Jan 22 16:15:28 crc kubenswrapper[4825]: I0122 16:15:28.119253 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 22 16:15:28 crc kubenswrapper[4825]: I0122 16:15:28.629565 4825 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 22 16:15:28 crc kubenswrapper[4825]: I0122 16:15:28.632757 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 22 16:15:29 crc kubenswrapper[4825]: I0122 16:15:29.422720 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d022aa13-5f44-4fc9-8796-f86c575836ce","Type":"ContainerStarted","Data":"66d7c2fe907621b6ab6b532de1bb3c3ad6baf25322ebcd54d13ca32ccea5a829"}
Jan 22 16:15:29 crc kubenswrapper[4825]: I0122 16:15:29.518274 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80"
Jan 22 16:15:29 crc kubenswrapper[4825]: E0122 16:15:29.518579 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:15:40 crc kubenswrapper[4825]: I0122 16:15:40.518238 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80"
Jan 22 16:15:40 crc kubenswrapper[4825]: E0122 16:15:40.519142 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:15:45 crc kubenswrapper[4825]: I0122 16:15:45.498590 4825 scope.go:117] "RemoveContainer" containerID="d467e36b7b1faefb9f88e8bb60813f3e0e222f49ca7798b5f0ee66c5bd195fda"
Jan 22 16:15:53 crc kubenswrapper[4825]: I0122 16:15:53.524162 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80"
Jan 22 16:15:53 crc kubenswrapper[4825]: E0122 16:15:53.524909 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:16:00 crc kubenswrapper[4825]: E0122 16:16:00.861686 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Jan 22 16:16:00 crc kubenswrapper[4825]: E0122 16:16:00.862746 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkwwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(d022aa13-5f44-4fc9-8796-f86c575836ce): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 22 16:16:00 crc kubenswrapper[4825]: E0122 16:16:00.863905 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="d022aa13-5f44-4fc9-8796-f86c575836ce"
Jan 22 16:16:00 crc kubenswrapper[4825]: E0122 16:16:00.881866 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="d022aa13-5f44-4fc9-8796-f86c575836ce"
Jan 22 16:16:04 crc kubenswrapper[4825]: I0122 16:16:04.517910 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80"
Jan 22 16:16:04 crc kubenswrapper[4825]: E0122 16:16:04.518912 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:16:15 crc kubenswrapper[4825]: I0122 16:16:15.002213 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 22 16:16:17 crc kubenswrapper[4825]: I0122 16:16:17.080430 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d022aa13-5f44-4fc9-8796-f86c575836ce","Type":"ContainerStarted","Data":"8bfa90e8826272620157b48e22094849fb948150eb7588fab720babdb6507b27"}
Jan 22 16:16:17 crc kubenswrapper[4825]: I0122 16:16:17.107941 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.739701016 podStartE2EDuration="51.107893701s" podCreationTimestamp="2026-01-22 16:15:26 +0000 UTC" firstStartedPulling="2026-01-22 16:15:28.629241502 +0000 UTC m=+3075.390768412" lastFinishedPulling="2026-01-22 16:16:14.997434187 +0000 UTC m=+3121.758961097" observedRunningTime="2026-01-22 16:16:17.105720639 +0000 UTC m=+3123.867247549" watchObservedRunningTime="2026-01-22 16:16:17.107893701 +0000 UTC m=+3123.869420611"
Jan 22 16:16:18 crc kubenswrapper[4825]: I0122 16:16:18.516874 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80"
Jan 22 16:16:18 crc kubenswrapper[4825]: E0122 16:16:18.517575 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:16:31 crc kubenswrapper[4825]: I0122 16:16:31.517801 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80"
Jan 22 16:16:31 crc kubenswrapper[4825]: E0122 16:16:31.518553 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:16:44 crc kubenswrapper[4825]: I0122 16:16:44.518154 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80"
Jan 22 16:16:44 crc kubenswrapper[4825]: E0122 16:16:44.519047 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:16:56 crc kubenswrapper[4825]: I0122 16:16:56.518457 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80"
Jan 22 16:16:56 crc
kubenswrapper[4825]: E0122 16:16:56.519281 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:17:09 crc kubenswrapper[4825]: I0122 16:17:09.517785 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80" Jan 22 16:17:09 crc kubenswrapper[4825]: E0122 16:17:09.518565 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:17:23 crc kubenswrapper[4825]: I0122 16:17:23.525857 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80" Jan 22 16:17:23 crc kubenswrapper[4825]: E0122 16:17:23.526614 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:17:34 crc kubenswrapper[4825]: I0122 16:17:34.518535 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80" Jan 
22 16:17:34 crc kubenswrapper[4825]: E0122 16:17:34.519378 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:17:46 crc kubenswrapper[4825]: I0122 16:17:46.516965 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80" Jan 22 16:17:46 crc kubenswrapper[4825]: E0122 16:17:46.517834 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:17:57 crc kubenswrapper[4825]: I0122 16:17:57.602887 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80" Jan 22 16:17:57 crc kubenswrapper[4825]: E0122 16:17:57.605101 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:18:10 crc kubenswrapper[4825]: I0122 16:18:10.531079 4825 scope.go:117] "RemoveContainer" 
containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80" Jan 22 16:18:10 crc kubenswrapper[4825]: E0122 16:18:10.531918 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:18:21 crc kubenswrapper[4825]: I0122 16:18:21.517291 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80" Jan 22 16:18:21 crc kubenswrapper[4825]: E0122 16:18:21.518366 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:18:36 crc kubenswrapper[4825]: I0122 16:18:36.516695 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80" Jan 22 16:18:36 crc kubenswrapper[4825]: E0122 16:18:36.517447 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:18:48 crc kubenswrapper[4825]: I0122 16:18:48.517582 4825 scope.go:117] 
"RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80" Jan 22 16:18:48 crc kubenswrapper[4825]: E0122 16:18:48.518448 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:19:01 crc kubenswrapper[4825]: I0122 16:19:01.517785 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80" Jan 22 16:19:01 crc kubenswrapper[4825]: E0122 16:19:01.518956 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:19:12 crc kubenswrapper[4825]: I0122 16:19:12.517088 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80" Jan 22 16:19:12 crc kubenswrapper[4825]: E0122 16:19:12.518050 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:19:21 crc kubenswrapper[4825]: I0122 16:19:21.404577 
4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9r9bh"] Jan 22 16:19:21 crc kubenswrapper[4825]: I0122 16:19:21.407765 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9r9bh" Jan 22 16:19:21 crc kubenswrapper[4825]: I0122 16:19:21.422944 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9r9bh"] Jan 22 16:19:21 crc kubenswrapper[4825]: I0122 16:19:21.520713 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c712f5-c640-415d-b5e0-c90b6c4fd099-utilities\") pod \"certified-operators-9r9bh\" (UID: \"71c712f5-c640-415d-b5e0-c90b6c4fd099\") " pod="openshift-marketplace/certified-operators-9r9bh" Jan 22 16:19:21 crc kubenswrapper[4825]: I0122 16:19:21.520757 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c712f5-c640-415d-b5e0-c90b6c4fd099-catalog-content\") pod \"certified-operators-9r9bh\" (UID: \"71c712f5-c640-415d-b5e0-c90b6c4fd099\") " pod="openshift-marketplace/certified-operators-9r9bh" Jan 22 16:19:21 crc kubenswrapper[4825]: I0122 16:19:21.521584 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svm5x\" (UniqueName: \"kubernetes.io/projected/71c712f5-c640-415d-b5e0-c90b6c4fd099-kube-api-access-svm5x\") pod \"certified-operators-9r9bh\" (UID: \"71c712f5-c640-415d-b5e0-c90b6c4fd099\") " pod="openshift-marketplace/certified-operators-9r9bh" Jan 22 16:19:21 crc kubenswrapper[4825]: I0122 16:19:21.623845 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svm5x\" (UniqueName: \"kubernetes.io/projected/71c712f5-c640-415d-b5e0-c90b6c4fd099-kube-api-access-svm5x\") pod 
\"certified-operators-9r9bh\" (UID: \"71c712f5-c640-415d-b5e0-c90b6c4fd099\") " pod="openshift-marketplace/certified-operators-9r9bh" Jan 22 16:19:21 crc kubenswrapper[4825]: I0122 16:19:21.623952 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c712f5-c640-415d-b5e0-c90b6c4fd099-utilities\") pod \"certified-operators-9r9bh\" (UID: \"71c712f5-c640-415d-b5e0-c90b6c4fd099\") " pod="openshift-marketplace/certified-operators-9r9bh" Jan 22 16:19:21 crc kubenswrapper[4825]: I0122 16:19:21.623970 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c712f5-c640-415d-b5e0-c90b6c4fd099-catalog-content\") pod \"certified-operators-9r9bh\" (UID: \"71c712f5-c640-415d-b5e0-c90b6c4fd099\") " pod="openshift-marketplace/certified-operators-9r9bh" Jan 22 16:19:21 crc kubenswrapper[4825]: I0122 16:19:21.624860 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c712f5-c640-415d-b5e0-c90b6c4fd099-utilities\") pod \"certified-operators-9r9bh\" (UID: \"71c712f5-c640-415d-b5e0-c90b6c4fd099\") " pod="openshift-marketplace/certified-operators-9r9bh" Jan 22 16:19:21 crc kubenswrapper[4825]: I0122 16:19:21.625237 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c712f5-c640-415d-b5e0-c90b6c4fd099-catalog-content\") pod \"certified-operators-9r9bh\" (UID: \"71c712f5-c640-415d-b5e0-c90b6c4fd099\") " pod="openshift-marketplace/certified-operators-9r9bh" Jan 22 16:19:21 crc kubenswrapper[4825]: I0122 16:19:21.643325 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svm5x\" (UniqueName: \"kubernetes.io/projected/71c712f5-c640-415d-b5e0-c90b6c4fd099-kube-api-access-svm5x\") pod \"certified-operators-9r9bh\" (UID: 
\"71c712f5-c640-415d-b5e0-c90b6c4fd099\") " pod="openshift-marketplace/certified-operators-9r9bh" Jan 22 16:19:21 crc kubenswrapper[4825]: I0122 16:19:21.734696 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9r9bh" Jan 22 16:19:22 crc kubenswrapper[4825]: I0122 16:19:22.483782 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9r9bh"] Jan 22 16:19:22 crc kubenswrapper[4825]: I0122 16:19:22.953844 4825 generic.go:334] "Generic (PLEG): container finished" podID="71c712f5-c640-415d-b5e0-c90b6c4fd099" containerID="9d13f5f77612f4fd6a383549a7664de7f6e55241828159021e42992c0cf88f69" exitCode=0 Jan 22 16:19:22 crc kubenswrapper[4825]: I0122 16:19:22.953887 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9r9bh" event={"ID":"71c712f5-c640-415d-b5e0-c90b6c4fd099","Type":"ContainerDied","Data":"9d13f5f77612f4fd6a383549a7664de7f6e55241828159021e42992c0cf88f69"} Jan 22 16:19:22 crc kubenswrapper[4825]: I0122 16:19:22.953918 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9r9bh" event={"ID":"71c712f5-c640-415d-b5e0-c90b6c4fd099","Type":"ContainerStarted","Data":"5c1e6c3545487a24a4acd5d4e57e0e2d100ca4d4ca54505ba817be604eaf8480"} Jan 22 16:19:23 crc kubenswrapper[4825]: I0122 16:19:23.970413 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9r9bh" event={"ID":"71c712f5-c640-415d-b5e0-c90b6c4fd099","Type":"ContainerStarted","Data":"01a9088eac8ec4a031d5c0b4ee65246374c5665cd24a7f488c6c065712d47b0b"} Jan 22 16:19:24 crc kubenswrapper[4825]: I0122 16:19:24.981453 4825 generic.go:334] "Generic (PLEG): container finished" podID="71c712f5-c640-415d-b5e0-c90b6c4fd099" containerID="01a9088eac8ec4a031d5c0b4ee65246374c5665cd24a7f488c6c065712d47b0b" exitCode=0 Jan 22 16:19:24 crc kubenswrapper[4825]: I0122 
16:19:24.981513 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9r9bh" event={"ID":"71c712f5-c640-415d-b5e0-c90b6c4fd099","Type":"ContainerDied","Data":"01a9088eac8ec4a031d5c0b4ee65246374c5665cd24a7f488c6c065712d47b0b"} Jan 22 16:19:25 crc kubenswrapper[4825]: I0122 16:19:25.517706 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80" Jan 22 16:19:25 crc kubenswrapper[4825]: E0122 16:19:25.518391 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:19:25 crc kubenswrapper[4825]: I0122 16:19:25.995173 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9r9bh" event={"ID":"71c712f5-c640-415d-b5e0-c90b6c4fd099","Type":"ContainerStarted","Data":"f65d3be7ba8883cb0f5f264565f494df9f618e1943c6df679e0931965a77f3ae"} Jan 22 16:19:26 crc kubenswrapper[4825]: I0122 16:19:26.013271 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9r9bh" podStartSLOduration=2.590114606 podStartE2EDuration="5.013236376s" podCreationTimestamp="2026-01-22 16:19:21 +0000 UTC" firstStartedPulling="2026-01-22 16:19:22.954972215 +0000 UTC m=+3309.716499125" lastFinishedPulling="2026-01-22 16:19:25.378093985 +0000 UTC m=+3312.139620895" observedRunningTime="2026-01-22 16:19:26.010148468 +0000 UTC m=+3312.771675378" watchObservedRunningTime="2026-01-22 16:19:26.013236376 +0000 UTC m=+3312.774763286" Jan 22 16:19:31 crc kubenswrapper[4825]: I0122 16:19:31.735069 4825 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9r9bh" Jan 22 16:19:31 crc kubenswrapper[4825]: I0122 16:19:31.735682 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9r9bh" Jan 22 16:19:31 crc kubenswrapper[4825]: I0122 16:19:31.789332 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9r9bh" Jan 22 16:19:32 crc kubenswrapper[4825]: I0122 16:19:32.119430 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9r9bh" Jan 22 16:19:32 crc kubenswrapper[4825]: I0122 16:19:32.218815 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9r9bh"] Jan 22 16:19:34 crc kubenswrapper[4825]: I0122 16:19:34.074154 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9r9bh" podUID="71c712f5-c640-415d-b5e0-c90b6c4fd099" containerName="registry-server" containerID="cri-o://f65d3be7ba8883cb0f5f264565f494df9f618e1943c6df679e0931965a77f3ae" gracePeriod=2 Jan 22 16:19:34 crc kubenswrapper[4825]: I0122 16:19:34.844891 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9r9bh" Jan 22 16:19:34 crc kubenswrapper[4825]: I0122 16:19:34.908305 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svm5x\" (UniqueName: \"kubernetes.io/projected/71c712f5-c640-415d-b5e0-c90b6c4fd099-kube-api-access-svm5x\") pod \"71c712f5-c640-415d-b5e0-c90b6c4fd099\" (UID: \"71c712f5-c640-415d-b5e0-c90b6c4fd099\") " Jan 22 16:19:34 crc kubenswrapper[4825]: I0122 16:19:34.908645 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c712f5-c640-415d-b5e0-c90b6c4fd099-utilities\") pod \"71c712f5-c640-415d-b5e0-c90b6c4fd099\" (UID: \"71c712f5-c640-415d-b5e0-c90b6c4fd099\") " Jan 22 16:19:34 crc kubenswrapper[4825]: I0122 16:19:34.908818 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c712f5-c640-415d-b5e0-c90b6c4fd099-catalog-content\") pod \"71c712f5-c640-415d-b5e0-c90b6c4fd099\" (UID: \"71c712f5-c640-415d-b5e0-c90b6c4fd099\") " Jan 22 16:19:34 crc kubenswrapper[4825]: I0122 16:19:34.909755 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c712f5-c640-415d-b5e0-c90b6c4fd099-utilities" (OuterVolumeSpecName: "utilities") pod "71c712f5-c640-415d-b5e0-c90b6c4fd099" (UID: "71c712f5-c640-415d-b5e0-c90b6c4fd099"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:19:34 crc kubenswrapper[4825]: I0122 16:19:34.917247 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c712f5-c640-415d-b5e0-c90b6c4fd099-kube-api-access-svm5x" (OuterVolumeSpecName: "kube-api-access-svm5x") pod "71c712f5-c640-415d-b5e0-c90b6c4fd099" (UID: "71c712f5-c640-415d-b5e0-c90b6c4fd099"). InnerVolumeSpecName "kube-api-access-svm5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:19:34 crc kubenswrapper[4825]: I0122 16:19:34.984596 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c712f5-c640-415d-b5e0-c90b6c4fd099-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c712f5-c640-415d-b5e0-c90b6c4fd099" (UID: "71c712f5-c640-415d-b5e0-c90b6c4fd099"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:19:35 crc kubenswrapper[4825]: I0122 16:19:35.011355 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svm5x\" (UniqueName: \"kubernetes.io/projected/71c712f5-c640-415d-b5e0-c90b6c4fd099-kube-api-access-svm5x\") on node \"crc\" DevicePath \"\"" Jan 22 16:19:35 crc kubenswrapper[4825]: I0122 16:19:35.011399 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c712f5-c640-415d-b5e0-c90b6c4fd099-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:19:35 crc kubenswrapper[4825]: I0122 16:19:35.011412 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c712f5-c640-415d-b5e0-c90b6c4fd099-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:19:35 crc kubenswrapper[4825]: I0122 16:19:35.084449 4825 generic.go:334] "Generic (PLEG): container finished" podID="71c712f5-c640-415d-b5e0-c90b6c4fd099" containerID="f65d3be7ba8883cb0f5f264565f494df9f618e1943c6df679e0931965a77f3ae" exitCode=0 Jan 22 16:19:35 crc kubenswrapper[4825]: I0122 16:19:35.084658 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9r9bh" event={"ID":"71c712f5-c640-415d-b5e0-c90b6c4fd099","Type":"ContainerDied","Data":"f65d3be7ba8883cb0f5f264565f494df9f618e1943c6df679e0931965a77f3ae"} Jan 22 16:19:35 crc kubenswrapper[4825]: I0122 16:19:35.085551 4825 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-9r9bh" event={"ID":"71c712f5-c640-415d-b5e0-c90b6c4fd099","Type":"ContainerDied","Data":"5c1e6c3545487a24a4acd5d4e57e0e2d100ca4d4ca54505ba817be604eaf8480"} Jan 22 16:19:35 crc kubenswrapper[4825]: I0122 16:19:35.085623 4825 scope.go:117] "RemoveContainer" containerID="f65d3be7ba8883cb0f5f264565f494df9f618e1943c6df679e0931965a77f3ae" Jan 22 16:19:35 crc kubenswrapper[4825]: I0122 16:19:35.084806 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9r9bh" Jan 22 16:19:35 crc kubenswrapper[4825]: I0122 16:19:35.127860 4825 scope.go:117] "RemoveContainer" containerID="01a9088eac8ec4a031d5c0b4ee65246374c5665cd24a7f488c6c065712d47b0b" Jan 22 16:19:35 crc kubenswrapper[4825]: I0122 16:19:35.140368 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9r9bh"] Jan 22 16:19:35 crc kubenswrapper[4825]: I0122 16:19:35.157480 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9r9bh"] Jan 22 16:19:35 crc kubenswrapper[4825]: I0122 16:19:35.218101 4825 scope.go:117] "RemoveContainer" containerID="9d13f5f77612f4fd6a383549a7664de7f6e55241828159021e42992c0cf88f69" Jan 22 16:19:35 crc kubenswrapper[4825]: I0122 16:19:35.287330 4825 scope.go:117] "RemoveContainer" containerID="f65d3be7ba8883cb0f5f264565f494df9f618e1943c6df679e0931965a77f3ae" Jan 22 16:19:35 crc kubenswrapper[4825]: E0122 16:19:35.288618 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f65d3be7ba8883cb0f5f264565f494df9f618e1943c6df679e0931965a77f3ae\": container with ID starting with f65d3be7ba8883cb0f5f264565f494df9f618e1943c6df679e0931965a77f3ae not found: ID does not exist" containerID="f65d3be7ba8883cb0f5f264565f494df9f618e1943c6df679e0931965a77f3ae" Jan 22 16:19:35 crc kubenswrapper[4825]: I0122 
16:19:35.288660 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f65d3be7ba8883cb0f5f264565f494df9f618e1943c6df679e0931965a77f3ae"} err="failed to get container status \"f65d3be7ba8883cb0f5f264565f494df9f618e1943c6df679e0931965a77f3ae\": rpc error: code = NotFound desc = could not find container \"f65d3be7ba8883cb0f5f264565f494df9f618e1943c6df679e0931965a77f3ae\": container with ID starting with f65d3be7ba8883cb0f5f264565f494df9f618e1943c6df679e0931965a77f3ae not found: ID does not exist" Jan 22 16:19:35 crc kubenswrapper[4825]: I0122 16:19:35.288687 4825 scope.go:117] "RemoveContainer" containerID="01a9088eac8ec4a031d5c0b4ee65246374c5665cd24a7f488c6c065712d47b0b" Jan 22 16:19:35 crc kubenswrapper[4825]: E0122 16:19:35.289179 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01a9088eac8ec4a031d5c0b4ee65246374c5665cd24a7f488c6c065712d47b0b\": container with ID starting with 01a9088eac8ec4a031d5c0b4ee65246374c5665cd24a7f488c6c065712d47b0b not found: ID does not exist" containerID="01a9088eac8ec4a031d5c0b4ee65246374c5665cd24a7f488c6c065712d47b0b" Jan 22 16:19:35 crc kubenswrapper[4825]: I0122 16:19:35.289324 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01a9088eac8ec4a031d5c0b4ee65246374c5665cd24a7f488c6c065712d47b0b"} err="failed to get container status \"01a9088eac8ec4a031d5c0b4ee65246374c5665cd24a7f488c6c065712d47b0b\": rpc error: code = NotFound desc = could not find container \"01a9088eac8ec4a031d5c0b4ee65246374c5665cd24a7f488c6c065712d47b0b\": container with ID starting with 01a9088eac8ec4a031d5c0b4ee65246374c5665cd24a7f488c6c065712d47b0b not found: ID does not exist" Jan 22 16:19:35 crc kubenswrapper[4825]: I0122 16:19:35.289453 4825 scope.go:117] "RemoveContainer" containerID="9d13f5f77612f4fd6a383549a7664de7f6e55241828159021e42992c0cf88f69" Jan 22 16:19:35 crc 
kubenswrapper[4825]: E0122 16:19:35.290241 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d13f5f77612f4fd6a383549a7664de7f6e55241828159021e42992c0cf88f69\": container with ID starting with 9d13f5f77612f4fd6a383549a7664de7f6e55241828159021e42992c0cf88f69 not found: ID does not exist" containerID="9d13f5f77612f4fd6a383549a7664de7f6e55241828159021e42992c0cf88f69" Jan 22 16:19:35 crc kubenswrapper[4825]: I0122 16:19:35.290357 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d13f5f77612f4fd6a383549a7664de7f6e55241828159021e42992c0cf88f69"} err="failed to get container status \"9d13f5f77612f4fd6a383549a7664de7f6e55241828159021e42992c0cf88f69\": rpc error: code = NotFound desc = could not find container \"9d13f5f77612f4fd6a383549a7664de7f6e55241828159021e42992c0cf88f69\": container with ID starting with 9d13f5f77612f4fd6a383549a7664de7f6e55241828159021e42992c0cf88f69 not found: ID does not exist" Jan 22 16:19:35 crc kubenswrapper[4825]: I0122 16:19:35.530188 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c712f5-c640-415d-b5e0-c90b6c4fd099" path="/var/lib/kubelet/pods/71c712f5-c640-415d-b5e0-c90b6c4fd099/volumes" Jan 22 16:19:38 crc kubenswrapper[4825]: I0122 16:19:38.517735 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80" Jan 22 16:19:38 crc kubenswrapper[4825]: E0122 16:19:38.518731 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:19:51 crc 
kubenswrapper[4825]: I0122 16:19:51.531253 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80" Jan 22 16:19:51 crc kubenswrapper[4825]: E0122 16:19:51.532105 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:20:05 crc kubenswrapper[4825]: I0122 16:20:05.519461 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80" Jan 22 16:20:05 crc kubenswrapper[4825]: E0122 16:20:05.520208 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:20:14 crc kubenswrapper[4825]: I0122 16:20:14.971202 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8scwq"] Jan 22 16:20:14 crc kubenswrapper[4825]: E0122 16:20:14.972665 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c712f5-c640-415d-b5e0-c90b6c4fd099" containerName="extract-content" Jan 22 16:20:14 crc kubenswrapper[4825]: I0122 16:20:14.972699 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c712f5-c640-415d-b5e0-c90b6c4fd099" containerName="extract-content" Jan 22 16:20:14 crc kubenswrapper[4825]: E0122 16:20:14.972760 4825 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="71c712f5-c640-415d-b5e0-c90b6c4fd099" containerName="extract-utilities" Jan 22 16:20:14 crc kubenswrapper[4825]: I0122 16:20:14.972775 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c712f5-c640-415d-b5e0-c90b6c4fd099" containerName="extract-utilities" Jan 22 16:20:14 crc kubenswrapper[4825]: E0122 16:20:14.972819 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c712f5-c640-415d-b5e0-c90b6c4fd099" containerName="registry-server" Jan 22 16:20:14 crc kubenswrapper[4825]: I0122 16:20:14.972832 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c712f5-c640-415d-b5e0-c90b6c4fd099" containerName="registry-server" Jan 22 16:20:14 crc kubenswrapper[4825]: I0122 16:20:14.973249 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="71c712f5-c640-415d-b5e0-c90b6c4fd099" containerName="registry-server" Jan 22 16:20:14 crc kubenswrapper[4825]: I0122 16:20:14.976019 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8scwq" Jan 22 16:20:14 crc kubenswrapper[4825]: I0122 16:20:14.987906 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8scwq"] Jan 22 16:20:15 crc kubenswrapper[4825]: I0122 16:20:15.170740 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5420d280-e38e-4ccf-9c98-6e9b41cd70c6-utilities\") pod \"redhat-operators-8scwq\" (UID: \"5420d280-e38e-4ccf-9c98-6e9b41cd70c6\") " pod="openshift-marketplace/redhat-operators-8scwq" Jan 22 16:20:15 crc kubenswrapper[4825]: I0122 16:20:15.170934 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5420d280-e38e-4ccf-9c98-6e9b41cd70c6-catalog-content\") pod \"redhat-operators-8scwq\" (UID: \"5420d280-e38e-4ccf-9c98-6e9b41cd70c6\") " pod="openshift-marketplace/redhat-operators-8scwq" Jan 22 16:20:15 crc kubenswrapper[4825]: I0122 16:20:15.171070 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mvgv\" (UniqueName: \"kubernetes.io/projected/5420d280-e38e-4ccf-9c98-6e9b41cd70c6-kube-api-access-7mvgv\") pod \"redhat-operators-8scwq\" (UID: \"5420d280-e38e-4ccf-9c98-6e9b41cd70c6\") " pod="openshift-marketplace/redhat-operators-8scwq" Jan 22 16:20:15 crc kubenswrapper[4825]: I0122 16:20:15.273122 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5420d280-e38e-4ccf-9c98-6e9b41cd70c6-utilities\") pod \"redhat-operators-8scwq\" (UID: \"5420d280-e38e-4ccf-9c98-6e9b41cd70c6\") " pod="openshift-marketplace/redhat-operators-8scwq" Jan 22 16:20:15 crc kubenswrapper[4825]: I0122 16:20:15.273184 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5420d280-e38e-4ccf-9c98-6e9b41cd70c6-catalog-content\") pod \"redhat-operators-8scwq\" (UID: \"5420d280-e38e-4ccf-9c98-6e9b41cd70c6\") " pod="openshift-marketplace/redhat-operators-8scwq" Jan 22 16:20:15 crc kubenswrapper[4825]: I0122 16:20:15.273245 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mvgv\" (UniqueName: \"kubernetes.io/projected/5420d280-e38e-4ccf-9c98-6e9b41cd70c6-kube-api-access-7mvgv\") pod \"redhat-operators-8scwq\" (UID: \"5420d280-e38e-4ccf-9c98-6e9b41cd70c6\") " pod="openshift-marketplace/redhat-operators-8scwq" Jan 22 16:20:15 crc kubenswrapper[4825]: I0122 16:20:15.273792 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5420d280-e38e-4ccf-9c98-6e9b41cd70c6-catalog-content\") pod \"redhat-operators-8scwq\" (UID: \"5420d280-e38e-4ccf-9c98-6e9b41cd70c6\") " pod="openshift-marketplace/redhat-operators-8scwq" Jan 22 16:20:15 crc kubenswrapper[4825]: I0122 16:20:15.274149 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5420d280-e38e-4ccf-9c98-6e9b41cd70c6-utilities\") pod \"redhat-operators-8scwq\" (UID: \"5420d280-e38e-4ccf-9c98-6e9b41cd70c6\") " pod="openshift-marketplace/redhat-operators-8scwq" Jan 22 16:20:15 crc kubenswrapper[4825]: I0122 16:20:15.293890 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mvgv\" (UniqueName: \"kubernetes.io/projected/5420d280-e38e-4ccf-9c98-6e9b41cd70c6-kube-api-access-7mvgv\") pod \"redhat-operators-8scwq\" (UID: \"5420d280-e38e-4ccf-9c98-6e9b41cd70c6\") " pod="openshift-marketplace/redhat-operators-8scwq" Jan 22 16:20:15 crc kubenswrapper[4825]: I0122 16:20:15.309747 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8scwq" Jan 22 16:20:15 crc kubenswrapper[4825]: I0122 16:20:15.798156 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8scwq"] Jan 22 16:20:16 crc kubenswrapper[4825]: I0122 16:20:16.758102 4825 generic.go:334] "Generic (PLEG): container finished" podID="5420d280-e38e-4ccf-9c98-6e9b41cd70c6" containerID="df4e4ae7506f2ffe58102615d8e66ba560464627f86585a3abb510f777bc4599" exitCode=0 Jan 22 16:20:16 crc kubenswrapper[4825]: I0122 16:20:16.758396 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8scwq" event={"ID":"5420d280-e38e-4ccf-9c98-6e9b41cd70c6","Type":"ContainerDied","Data":"df4e4ae7506f2ffe58102615d8e66ba560464627f86585a3abb510f777bc4599"} Jan 22 16:20:16 crc kubenswrapper[4825]: I0122 16:20:16.758436 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8scwq" event={"ID":"5420d280-e38e-4ccf-9c98-6e9b41cd70c6","Type":"ContainerStarted","Data":"baae122a106fe2b679985c3c4442449ed4e5109c1d8b40f8bfecc9e89fa81dbe"} Jan 22 16:20:18 crc kubenswrapper[4825]: I0122 16:20:18.787577 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8scwq" event={"ID":"5420d280-e38e-4ccf-9c98-6e9b41cd70c6","Type":"ContainerStarted","Data":"c40f5515e80c654bef55f3f85084bd9ebc83366ccd4232c6f30269bcc24cc943"} Jan 22 16:20:20 crc kubenswrapper[4825]: I0122 16:20:20.517427 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80" Jan 22 16:20:21 crc kubenswrapper[4825]: I0122 16:20:21.821126 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerStarted","Data":"3e05690f72d9972e52963cd90f0219528b87bcd469134ce86f8ebbe78d329a4d"} Jan 22 16:20:23 crc 
kubenswrapper[4825]: I0122 16:20:23.872647 4825 generic.go:334] "Generic (PLEG): container finished" podID="5420d280-e38e-4ccf-9c98-6e9b41cd70c6" containerID="c40f5515e80c654bef55f3f85084bd9ebc83366ccd4232c6f30269bcc24cc943" exitCode=0 Jan 22 16:20:23 crc kubenswrapper[4825]: I0122 16:20:23.873161 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8scwq" event={"ID":"5420d280-e38e-4ccf-9c98-6e9b41cd70c6","Type":"ContainerDied","Data":"c40f5515e80c654bef55f3f85084bd9ebc83366ccd4232c6f30269bcc24cc943"} Jan 22 16:20:24 crc kubenswrapper[4825]: I0122 16:20:24.886317 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8scwq" event={"ID":"5420d280-e38e-4ccf-9c98-6e9b41cd70c6","Type":"ContainerStarted","Data":"b498613f5f2dcd0ad63095c8982974fedcd46ca242d765b4985213d2fdde3c50"} Jan 22 16:20:24 crc kubenswrapper[4825]: I0122 16:20:24.912727 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8scwq" podStartSLOduration=3.305173215 podStartE2EDuration="10.912684789s" podCreationTimestamp="2026-01-22 16:20:14 +0000 UTC" firstStartedPulling="2026-01-22 16:20:16.76123166 +0000 UTC m=+3363.522758570" lastFinishedPulling="2026-01-22 16:20:24.368743234 +0000 UTC m=+3371.130270144" observedRunningTime="2026-01-22 16:20:24.90431975 +0000 UTC m=+3371.665846660" watchObservedRunningTime="2026-01-22 16:20:24.912684789 +0000 UTC m=+3371.674211709" Jan 22 16:20:25 crc kubenswrapper[4825]: I0122 16:20:25.311388 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8scwq" Jan 22 16:20:25 crc kubenswrapper[4825]: I0122 16:20:25.311727 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8scwq" Jan 22 16:20:26 crc kubenswrapper[4825]: I0122 16:20:26.362110 4825 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-8scwq" podUID="5420d280-e38e-4ccf-9c98-6e9b41cd70c6" containerName="registry-server" probeResult="failure" output=< Jan 22 16:20:26 crc kubenswrapper[4825]: timeout: failed to connect service ":50051" within 1s Jan 22 16:20:26 crc kubenswrapper[4825]: > Jan 22 16:20:31 crc kubenswrapper[4825]: I0122 16:20:31.380627 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g2r64"] Jan 22 16:20:31 crc kubenswrapper[4825]: I0122 16:20:31.383664 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g2r64" Jan 22 16:20:31 crc kubenswrapper[4825]: I0122 16:20:31.397940 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g2r64"] Jan 22 16:20:31 crc kubenswrapper[4825]: I0122 16:20:31.564539 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7c9794-6797-45c2-b637-4fc0f16ca9d5-catalog-content\") pod \"community-operators-g2r64\" (UID: \"0c7c9794-6797-45c2-b637-4fc0f16ca9d5\") " pod="openshift-marketplace/community-operators-g2r64" Jan 22 16:20:31 crc kubenswrapper[4825]: I0122 16:20:31.564917 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7c9794-6797-45c2-b637-4fc0f16ca9d5-utilities\") pod \"community-operators-g2r64\" (UID: \"0c7c9794-6797-45c2-b637-4fc0f16ca9d5\") " pod="openshift-marketplace/community-operators-g2r64" Jan 22 16:20:31 crc kubenswrapper[4825]: I0122 16:20:31.565127 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsk6d\" (UniqueName: \"kubernetes.io/projected/0c7c9794-6797-45c2-b637-4fc0f16ca9d5-kube-api-access-lsk6d\") pod \"community-operators-g2r64\" (UID: 
\"0c7c9794-6797-45c2-b637-4fc0f16ca9d5\") " pod="openshift-marketplace/community-operators-g2r64" Jan 22 16:20:31 crc kubenswrapper[4825]: I0122 16:20:31.667748 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7c9794-6797-45c2-b637-4fc0f16ca9d5-catalog-content\") pod \"community-operators-g2r64\" (UID: \"0c7c9794-6797-45c2-b637-4fc0f16ca9d5\") " pod="openshift-marketplace/community-operators-g2r64" Jan 22 16:20:31 crc kubenswrapper[4825]: I0122 16:20:31.667826 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7c9794-6797-45c2-b637-4fc0f16ca9d5-utilities\") pod \"community-operators-g2r64\" (UID: \"0c7c9794-6797-45c2-b637-4fc0f16ca9d5\") " pod="openshift-marketplace/community-operators-g2r64" Jan 22 16:20:31 crc kubenswrapper[4825]: I0122 16:20:31.667993 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsk6d\" (UniqueName: \"kubernetes.io/projected/0c7c9794-6797-45c2-b637-4fc0f16ca9d5-kube-api-access-lsk6d\") pod \"community-operators-g2r64\" (UID: \"0c7c9794-6797-45c2-b637-4fc0f16ca9d5\") " pod="openshift-marketplace/community-operators-g2r64" Jan 22 16:20:31 crc kubenswrapper[4825]: I0122 16:20:31.668748 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7c9794-6797-45c2-b637-4fc0f16ca9d5-utilities\") pod \"community-operators-g2r64\" (UID: \"0c7c9794-6797-45c2-b637-4fc0f16ca9d5\") " pod="openshift-marketplace/community-operators-g2r64" Jan 22 16:20:31 crc kubenswrapper[4825]: I0122 16:20:31.669435 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7c9794-6797-45c2-b637-4fc0f16ca9d5-catalog-content\") pod \"community-operators-g2r64\" (UID: \"0c7c9794-6797-45c2-b637-4fc0f16ca9d5\") 
" pod="openshift-marketplace/community-operators-g2r64" Jan 22 16:20:31 crc kubenswrapper[4825]: I0122 16:20:31.694840 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsk6d\" (UniqueName: \"kubernetes.io/projected/0c7c9794-6797-45c2-b637-4fc0f16ca9d5-kube-api-access-lsk6d\") pod \"community-operators-g2r64\" (UID: \"0c7c9794-6797-45c2-b637-4fc0f16ca9d5\") " pod="openshift-marketplace/community-operators-g2r64" Jan 22 16:20:31 crc kubenswrapper[4825]: I0122 16:20:31.711212 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g2r64" Jan 22 16:20:32 crc kubenswrapper[4825]: I0122 16:20:32.287774 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g2r64"] Jan 22 16:20:33 crc kubenswrapper[4825]: I0122 16:20:33.287324 4825 generic.go:334] "Generic (PLEG): container finished" podID="0c7c9794-6797-45c2-b637-4fc0f16ca9d5" containerID="134d4edda10355dd1cee1853adbfc63104f55093e799eac25c7ab44e67a3879e" exitCode=0 Jan 22 16:20:33 crc kubenswrapper[4825]: I0122 16:20:33.287546 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2r64" event={"ID":"0c7c9794-6797-45c2-b637-4fc0f16ca9d5","Type":"ContainerDied","Data":"134d4edda10355dd1cee1853adbfc63104f55093e799eac25c7ab44e67a3879e"} Jan 22 16:20:33 crc kubenswrapper[4825]: I0122 16:20:33.287862 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2r64" event={"ID":"0c7c9794-6797-45c2-b637-4fc0f16ca9d5","Type":"ContainerStarted","Data":"6f3bd51bd64e3ff96900dcaf9fcdcd4bdbd901cc8eae6d3133fa98adf3d87c4b"} Jan 22 16:20:33 crc kubenswrapper[4825]: I0122 16:20:33.290611 4825 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 16:20:34 crc kubenswrapper[4825]: I0122 16:20:34.299281 4825 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-g2r64" event={"ID":"0c7c9794-6797-45c2-b637-4fc0f16ca9d5","Type":"ContainerStarted","Data":"d1be315cef444b6f8f9f27763f4a4ccdc3faa4c5f6c66c08ed85e15cd485bcf8"} Jan 22 16:20:35 crc kubenswrapper[4825]: I0122 16:20:35.312949 4825 generic.go:334] "Generic (PLEG): container finished" podID="0c7c9794-6797-45c2-b637-4fc0f16ca9d5" containerID="d1be315cef444b6f8f9f27763f4a4ccdc3faa4c5f6c66c08ed85e15cd485bcf8" exitCode=0 Jan 22 16:20:35 crc kubenswrapper[4825]: I0122 16:20:35.313014 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2r64" event={"ID":"0c7c9794-6797-45c2-b637-4fc0f16ca9d5","Type":"ContainerDied","Data":"d1be315cef444b6f8f9f27763f4a4ccdc3faa4c5f6c66c08ed85e15cd485bcf8"} Jan 22 16:20:36 crc kubenswrapper[4825]: I0122 16:20:36.328596 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2r64" event={"ID":"0c7c9794-6797-45c2-b637-4fc0f16ca9d5","Type":"ContainerStarted","Data":"b8676f3372cd9babbfa4880db8c170d2b4e344c65a710cef9f47ae466bc2011b"} Jan 22 16:20:36 crc kubenswrapper[4825]: I0122 16:20:36.347747 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g2r64" podStartSLOduration=2.891987167 podStartE2EDuration="5.347729019s" podCreationTimestamp="2026-01-22 16:20:31 +0000 UTC" firstStartedPulling="2026-01-22 16:20:33.290166938 +0000 UTC m=+3380.051693868" lastFinishedPulling="2026-01-22 16:20:35.74590881 +0000 UTC m=+3382.507435720" observedRunningTime="2026-01-22 16:20:36.345871286 +0000 UTC m=+3383.107398196" watchObservedRunningTime="2026-01-22 16:20:36.347729019 +0000 UTC m=+3383.109255929" Jan 22 16:20:36 crc kubenswrapper[4825]: I0122 16:20:36.446757 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8scwq" podUID="5420d280-e38e-4ccf-9c98-6e9b41cd70c6" containerName="registry-server" 
probeResult="failure" output=< Jan 22 16:20:36 crc kubenswrapper[4825]: timeout: failed to connect service ":50051" within 1s Jan 22 16:20:36 crc kubenswrapper[4825]: > Jan 22 16:20:41 crc kubenswrapper[4825]: I0122 16:20:41.712751 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g2r64" Jan 22 16:20:41 crc kubenswrapper[4825]: I0122 16:20:41.713242 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g2r64" Jan 22 16:20:41 crc kubenswrapper[4825]: I0122 16:20:41.778731 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g2r64" Jan 22 16:20:42 crc kubenswrapper[4825]: I0122 16:20:42.481642 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g2r64" Jan 22 16:20:42 crc kubenswrapper[4825]: I0122 16:20:42.545601 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g2r64"] Jan 22 16:20:44 crc kubenswrapper[4825]: I0122 16:20:44.541753 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g2r64" podUID="0c7c9794-6797-45c2-b637-4fc0f16ca9d5" containerName="registry-server" containerID="cri-o://b8676f3372cd9babbfa4880db8c170d2b4e344c65a710cef9f47ae466bc2011b" gracePeriod=2 Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.430440 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8scwq" Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.466166 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g2r64" Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.515382 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7c9794-6797-45c2-b637-4fc0f16ca9d5-utilities\") pod \"0c7c9794-6797-45c2-b637-4fc0f16ca9d5\" (UID: \"0c7c9794-6797-45c2-b637-4fc0f16ca9d5\") " Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.515452 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsk6d\" (UniqueName: \"kubernetes.io/projected/0c7c9794-6797-45c2-b637-4fc0f16ca9d5-kube-api-access-lsk6d\") pod \"0c7c9794-6797-45c2-b637-4fc0f16ca9d5\" (UID: \"0c7c9794-6797-45c2-b637-4fc0f16ca9d5\") " Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.515523 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7c9794-6797-45c2-b637-4fc0f16ca9d5-catalog-content\") pod \"0c7c9794-6797-45c2-b637-4fc0f16ca9d5\" (UID: \"0c7c9794-6797-45c2-b637-4fc0f16ca9d5\") " Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.515520 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8scwq" Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.517971 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c7c9794-6797-45c2-b637-4fc0f16ca9d5-utilities" (OuterVolumeSpecName: "utilities") pod "0c7c9794-6797-45c2-b637-4fc0f16ca9d5" (UID: "0c7c9794-6797-45c2-b637-4fc0f16ca9d5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.518813 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7c9794-6797-45c2-b637-4fc0f16ca9d5-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.524809 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c7c9794-6797-45c2-b637-4fc0f16ca9d5-kube-api-access-lsk6d" (OuterVolumeSpecName: "kube-api-access-lsk6d") pod "0c7c9794-6797-45c2-b637-4fc0f16ca9d5" (UID: "0c7c9794-6797-45c2-b637-4fc0f16ca9d5"). InnerVolumeSpecName "kube-api-access-lsk6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.577880 4825 generic.go:334] "Generic (PLEG): container finished" podID="0c7c9794-6797-45c2-b637-4fc0f16ca9d5" containerID="b8676f3372cd9babbfa4880db8c170d2b4e344c65a710cef9f47ae466bc2011b" exitCode=0 Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.577989 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2r64" event={"ID":"0c7c9794-6797-45c2-b637-4fc0f16ca9d5","Type":"ContainerDied","Data":"b8676f3372cd9babbfa4880db8c170d2b4e344c65a710cef9f47ae466bc2011b"} Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.578016 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g2r64" Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.578041 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2r64" event={"ID":"0c7c9794-6797-45c2-b637-4fc0f16ca9d5","Type":"ContainerDied","Data":"6f3bd51bd64e3ff96900dcaf9fcdcd4bdbd901cc8eae6d3133fa98adf3d87c4b"} Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.578084 4825 scope.go:117] "RemoveContainer" containerID="b8676f3372cd9babbfa4880db8c170d2b4e344c65a710cef9f47ae466bc2011b" Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.618634 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c7c9794-6797-45c2-b637-4fc0f16ca9d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c7c9794-6797-45c2-b637-4fc0f16ca9d5" (UID: "0c7c9794-6797-45c2-b637-4fc0f16ca9d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.622169 4825 scope.go:117] "RemoveContainer" containerID="d1be315cef444b6f8f9f27763f4a4ccdc3faa4c5f6c66c08ed85e15cd485bcf8" Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.623912 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsk6d\" (UniqueName: \"kubernetes.io/projected/0c7c9794-6797-45c2-b637-4fc0f16ca9d5-kube-api-access-lsk6d\") on node \"crc\" DevicePath \"\"" Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.623944 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7c9794-6797-45c2-b637-4fc0f16ca9d5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.684269 4825 scope.go:117] "RemoveContainer" containerID="134d4edda10355dd1cee1853adbfc63104f55093e799eac25c7ab44e67a3879e" Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.740186 
4825 scope.go:117] "RemoveContainer" containerID="b8676f3372cd9babbfa4880db8c170d2b4e344c65a710cef9f47ae466bc2011b" Jan 22 16:20:45 crc kubenswrapper[4825]: E0122 16:20:45.743138 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8676f3372cd9babbfa4880db8c170d2b4e344c65a710cef9f47ae466bc2011b\": container with ID starting with b8676f3372cd9babbfa4880db8c170d2b4e344c65a710cef9f47ae466bc2011b not found: ID does not exist" containerID="b8676f3372cd9babbfa4880db8c170d2b4e344c65a710cef9f47ae466bc2011b" Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.743185 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8676f3372cd9babbfa4880db8c170d2b4e344c65a710cef9f47ae466bc2011b"} err="failed to get container status \"b8676f3372cd9babbfa4880db8c170d2b4e344c65a710cef9f47ae466bc2011b\": rpc error: code = NotFound desc = could not find container \"b8676f3372cd9babbfa4880db8c170d2b4e344c65a710cef9f47ae466bc2011b\": container with ID starting with b8676f3372cd9babbfa4880db8c170d2b4e344c65a710cef9f47ae466bc2011b not found: ID does not exist" Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.743210 4825 scope.go:117] "RemoveContainer" containerID="d1be315cef444b6f8f9f27763f4a4ccdc3faa4c5f6c66c08ed85e15cd485bcf8" Jan 22 16:20:45 crc kubenswrapper[4825]: E0122 16:20:45.752143 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1be315cef444b6f8f9f27763f4a4ccdc3faa4c5f6c66c08ed85e15cd485bcf8\": container with ID starting with d1be315cef444b6f8f9f27763f4a4ccdc3faa4c5f6c66c08ed85e15cd485bcf8 not found: ID does not exist" containerID="d1be315cef444b6f8f9f27763f4a4ccdc3faa4c5f6c66c08ed85e15cd485bcf8" Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.752190 4825 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d1be315cef444b6f8f9f27763f4a4ccdc3faa4c5f6c66c08ed85e15cd485bcf8"} err="failed to get container status \"d1be315cef444b6f8f9f27763f4a4ccdc3faa4c5f6c66c08ed85e15cd485bcf8\": rpc error: code = NotFound desc = could not find container \"d1be315cef444b6f8f9f27763f4a4ccdc3faa4c5f6c66c08ed85e15cd485bcf8\": container with ID starting with d1be315cef444b6f8f9f27763f4a4ccdc3faa4c5f6c66c08ed85e15cd485bcf8 not found: ID does not exist" Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.752222 4825 scope.go:117] "RemoveContainer" containerID="134d4edda10355dd1cee1853adbfc63104f55093e799eac25c7ab44e67a3879e" Jan 22 16:20:45 crc kubenswrapper[4825]: E0122 16:20:45.754531 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"134d4edda10355dd1cee1853adbfc63104f55093e799eac25c7ab44e67a3879e\": container with ID starting with 134d4edda10355dd1cee1853adbfc63104f55093e799eac25c7ab44e67a3879e not found: ID does not exist" containerID="134d4edda10355dd1cee1853adbfc63104f55093e799eac25c7ab44e67a3879e" Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.754583 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134d4edda10355dd1cee1853adbfc63104f55093e799eac25c7ab44e67a3879e"} err="failed to get container status \"134d4edda10355dd1cee1853adbfc63104f55093e799eac25c7ab44e67a3879e\": rpc error: code = NotFound desc = could not find container \"134d4edda10355dd1cee1853adbfc63104f55093e799eac25c7ab44e67a3879e\": container with ID starting with 134d4edda10355dd1cee1853adbfc63104f55093e799eac25c7ab44e67a3879e not found: ID does not exist" Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.913920 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g2r64"] Jan 22 16:20:45 crc kubenswrapper[4825]: I0122 16:20:45.925611 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-g2r64"] Jan 22 16:20:47 crc kubenswrapper[4825]: I0122 16:20:47.639776 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c7c9794-6797-45c2-b637-4fc0f16ca9d5" path="/var/lib/kubelet/pods/0c7c9794-6797-45c2-b637-4fc0f16ca9d5/volumes" Jan 22 16:20:47 crc kubenswrapper[4825]: I0122 16:20:47.817443 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8scwq"] Jan 22 16:20:47 crc kubenswrapper[4825]: I0122 16:20:47.817784 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8scwq" podUID="5420d280-e38e-4ccf-9c98-6e9b41cd70c6" containerName="registry-server" containerID="cri-o://b498613f5f2dcd0ad63095c8982974fedcd46ca242d765b4985213d2fdde3c50" gracePeriod=2 Jan 22 16:20:48 crc kubenswrapper[4825]: I0122 16:20:48.661057 4825 generic.go:334] "Generic (PLEG): container finished" podID="5420d280-e38e-4ccf-9c98-6e9b41cd70c6" containerID="b498613f5f2dcd0ad63095c8982974fedcd46ca242d765b4985213d2fdde3c50" exitCode=0 Jan 22 16:20:48 crc kubenswrapper[4825]: I0122 16:20:48.661131 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8scwq" event={"ID":"5420d280-e38e-4ccf-9c98-6e9b41cd70c6","Type":"ContainerDied","Data":"b498613f5f2dcd0ad63095c8982974fedcd46ca242d765b4985213d2fdde3c50"} Jan 22 16:20:48 crc kubenswrapper[4825]: I0122 16:20:48.872661 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8scwq"
Jan 22 16:20:49 crc kubenswrapper[4825]: I0122 16:20:49.058297 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5420d280-e38e-4ccf-9c98-6e9b41cd70c6-utilities\") pod \"5420d280-e38e-4ccf-9c98-6e9b41cd70c6\" (UID: \"5420d280-e38e-4ccf-9c98-6e9b41cd70c6\") "
Jan 22 16:20:49 crc kubenswrapper[4825]: I0122 16:20:49.058375 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5420d280-e38e-4ccf-9c98-6e9b41cd70c6-catalog-content\") pod \"5420d280-e38e-4ccf-9c98-6e9b41cd70c6\" (UID: \"5420d280-e38e-4ccf-9c98-6e9b41cd70c6\") "
Jan 22 16:20:49 crc kubenswrapper[4825]: I0122 16:20:49.058640 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mvgv\" (UniqueName: \"kubernetes.io/projected/5420d280-e38e-4ccf-9c98-6e9b41cd70c6-kube-api-access-7mvgv\") pod \"5420d280-e38e-4ccf-9c98-6e9b41cd70c6\" (UID: \"5420d280-e38e-4ccf-9c98-6e9b41cd70c6\") "
Jan 22 16:20:49 crc kubenswrapper[4825]: I0122 16:20:49.059403 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5420d280-e38e-4ccf-9c98-6e9b41cd70c6-utilities" (OuterVolumeSpecName: "utilities") pod "5420d280-e38e-4ccf-9c98-6e9b41cd70c6" (UID: "5420d280-e38e-4ccf-9c98-6e9b41cd70c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 16:20:49 crc kubenswrapper[4825]: I0122 16:20:49.065289 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5420d280-e38e-4ccf-9c98-6e9b41cd70c6-kube-api-access-7mvgv" (OuterVolumeSpecName: "kube-api-access-7mvgv") pod "5420d280-e38e-4ccf-9c98-6e9b41cd70c6" (UID: "5420d280-e38e-4ccf-9c98-6e9b41cd70c6"). InnerVolumeSpecName "kube-api-access-7mvgv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:20:49 crc kubenswrapper[4825]: I0122 16:20:49.179280 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5420d280-e38e-4ccf-9c98-6e9b41cd70c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5420d280-e38e-4ccf-9c98-6e9b41cd70c6" (UID: "5420d280-e38e-4ccf-9c98-6e9b41cd70c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 16:20:49 crc kubenswrapper[4825]: I0122 16:20:49.477243 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mvgv\" (UniqueName: \"kubernetes.io/projected/5420d280-e38e-4ccf-9c98-6e9b41cd70c6-kube-api-access-7mvgv\") on node \"crc\" DevicePath \"\""
Jan 22 16:20:49 crc kubenswrapper[4825]: I0122 16:20:49.477798 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5420d280-e38e-4ccf-9c98-6e9b41cd70c6-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 16:20:49 crc kubenswrapper[4825]: I0122 16:20:49.478047 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5420d280-e38e-4ccf-9c98-6e9b41cd70c6-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 16:20:49 crc kubenswrapper[4825]: I0122 16:20:49.697660 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8scwq" event={"ID":"5420d280-e38e-4ccf-9c98-6e9b41cd70c6","Type":"ContainerDied","Data":"baae122a106fe2b679985c3c4442449ed4e5109c1d8b40f8bfecc9e89fa81dbe"}
Jan 22 16:20:49 crc kubenswrapper[4825]: I0122 16:20:49.697732 4825 scope.go:117] "RemoveContainer" containerID="b498613f5f2dcd0ad63095c8982974fedcd46ca242d765b4985213d2fdde3c50"
Jan 22 16:20:49 crc kubenswrapper[4825]: I0122 16:20:49.697963 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8scwq"
Jan 22 16:20:49 crc kubenswrapper[4825]: I0122 16:20:49.761123 4825 scope.go:117] "RemoveContainer" containerID="c40f5515e80c654bef55f3f85084bd9ebc83366ccd4232c6f30269bcc24cc943"
Jan 22 16:20:49 crc kubenswrapper[4825]: I0122 16:20:49.795606 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8scwq"]
Jan 22 16:20:49 crc kubenswrapper[4825]: I0122 16:20:49.798507 4825 scope.go:117] "RemoveContainer" containerID="df4e4ae7506f2ffe58102615d8e66ba560464627f86585a3abb510f777bc4599"
Jan 22 16:20:49 crc kubenswrapper[4825]: I0122 16:20:49.804196 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8scwq"]
Jan 22 16:20:51 crc kubenswrapper[4825]: I0122 16:20:51.538082 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5420d280-e38e-4ccf-9c98-6e9b41cd70c6" path="/var/lib/kubelet/pods/5420d280-e38e-4ccf-9c98-6e9b41cd70c6/volumes"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.007545 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mm2jb"]
Jan 22 16:22:21 crc kubenswrapper[4825]: E0122 16:22:21.008358 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5420d280-e38e-4ccf-9c98-6e9b41cd70c6" containerName="extract-utilities"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.008372 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="5420d280-e38e-4ccf-9c98-6e9b41cd70c6" containerName="extract-utilities"
Jan 22 16:22:21 crc kubenswrapper[4825]: E0122 16:22:21.008396 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7c9794-6797-45c2-b637-4fc0f16ca9d5" containerName="extract-utilities"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.008401 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7c9794-6797-45c2-b637-4fc0f16ca9d5" containerName="extract-utilities"
Jan 22 16:22:21 crc kubenswrapper[4825]: E0122 16:22:21.008430 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5420d280-e38e-4ccf-9c98-6e9b41cd70c6" containerName="extract-content"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.008437 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="5420d280-e38e-4ccf-9c98-6e9b41cd70c6" containerName="extract-content"
Jan 22 16:22:21 crc kubenswrapper[4825]: E0122 16:22:21.008445 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5420d280-e38e-4ccf-9c98-6e9b41cd70c6" containerName="registry-server"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.008450 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="5420d280-e38e-4ccf-9c98-6e9b41cd70c6" containerName="registry-server"
Jan 22 16:22:21 crc kubenswrapper[4825]: E0122 16:22:21.008457 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7c9794-6797-45c2-b637-4fc0f16ca9d5" containerName="registry-server"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.008463 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7c9794-6797-45c2-b637-4fc0f16ca9d5" containerName="registry-server"
Jan 22 16:22:21 crc kubenswrapper[4825]: E0122 16:22:21.008480 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7c9794-6797-45c2-b637-4fc0f16ca9d5" containerName="extract-content"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.008486 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7c9794-6797-45c2-b637-4fc0f16ca9d5" containerName="extract-content"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.008696 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c7c9794-6797-45c2-b637-4fc0f16ca9d5" containerName="registry-server"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.008716 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="5420d280-e38e-4ccf-9c98-6e9b41cd70c6" containerName="registry-server"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.011129 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mm2jb"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.027648 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm2jb"]
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.110833 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8120efd4-e79b-449f-b2c1-14f90134b348-catalog-content\") pod \"redhat-marketplace-mm2jb\" (UID: \"8120efd4-e79b-449f-b2c1-14f90134b348\") " pod="openshift-marketplace/redhat-marketplace-mm2jb"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.110926 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjfq8\" (UniqueName: \"kubernetes.io/projected/8120efd4-e79b-449f-b2c1-14f90134b348-kube-api-access-qjfq8\") pod \"redhat-marketplace-mm2jb\" (UID: \"8120efd4-e79b-449f-b2c1-14f90134b348\") " pod="openshift-marketplace/redhat-marketplace-mm2jb"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.110955 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8120efd4-e79b-449f-b2c1-14f90134b348-utilities\") pod \"redhat-marketplace-mm2jb\" (UID: \"8120efd4-e79b-449f-b2c1-14f90134b348\") " pod="openshift-marketplace/redhat-marketplace-mm2jb"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.213340 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8120efd4-e79b-449f-b2c1-14f90134b348-catalog-content\") pod \"redhat-marketplace-mm2jb\" (UID: \"8120efd4-e79b-449f-b2c1-14f90134b348\") " pod="openshift-marketplace/redhat-marketplace-mm2jb"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.213440 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjfq8\" (UniqueName: \"kubernetes.io/projected/8120efd4-e79b-449f-b2c1-14f90134b348-kube-api-access-qjfq8\") pod \"redhat-marketplace-mm2jb\" (UID: \"8120efd4-e79b-449f-b2c1-14f90134b348\") " pod="openshift-marketplace/redhat-marketplace-mm2jb"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.213467 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8120efd4-e79b-449f-b2c1-14f90134b348-utilities\") pod \"redhat-marketplace-mm2jb\" (UID: \"8120efd4-e79b-449f-b2c1-14f90134b348\") " pod="openshift-marketplace/redhat-marketplace-mm2jb"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.214177 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8120efd4-e79b-449f-b2c1-14f90134b348-utilities\") pod \"redhat-marketplace-mm2jb\" (UID: \"8120efd4-e79b-449f-b2c1-14f90134b348\") " pod="openshift-marketplace/redhat-marketplace-mm2jb"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.214381 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8120efd4-e79b-449f-b2c1-14f90134b348-catalog-content\") pod \"redhat-marketplace-mm2jb\" (UID: \"8120efd4-e79b-449f-b2c1-14f90134b348\") " pod="openshift-marketplace/redhat-marketplace-mm2jb"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.522114 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjfq8\" (UniqueName: \"kubernetes.io/projected/8120efd4-e79b-449f-b2c1-14f90134b348-kube-api-access-qjfq8\") pod \"redhat-marketplace-mm2jb\" (UID: \"8120efd4-e79b-449f-b2c1-14f90134b348\") " pod="openshift-marketplace/redhat-marketplace-mm2jb"
Jan 22 16:22:21 crc kubenswrapper[4825]: I0122 16:22:21.525227 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mm2jb"
Jan 22 16:22:22 crc kubenswrapper[4825]: I0122 16:22:22.087632 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm2jb"]
Jan 22 16:22:22 crc kubenswrapper[4825]: I0122 16:22:22.114099 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm2jb" event={"ID":"8120efd4-e79b-449f-b2c1-14f90134b348","Type":"ContainerStarted","Data":"02616cf0c465c4a1b33be47add36c35dcf7652a7cc5c1828d4642acc930f1cc1"}
Jan 22 16:22:23 crc kubenswrapper[4825]: I0122 16:22:23.125407 4825 generic.go:334] "Generic (PLEG): container finished" podID="8120efd4-e79b-449f-b2c1-14f90134b348" containerID="467929221c89b5702ccb2b102644ca3ce10050283f45bd6fcde50ac5338f72c6" exitCode=0
Jan 22 16:22:23 crc kubenswrapper[4825]: I0122 16:22:23.125486 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm2jb" event={"ID":"8120efd4-e79b-449f-b2c1-14f90134b348","Type":"ContainerDied","Data":"467929221c89b5702ccb2b102644ca3ce10050283f45bd6fcde50ac5338f72c6"}
Jan 22 16:22:25 crc kubenswrapper[4825]: I0122 16:22:25.199765 4825 generic.go:334] "Generic (PLEG): container finished" podID="8120efd4-e79b-449f-b2c1-14f90134b348" containerID="dcdc40db816299cee459c485731120cce5b9f46d81b43e4232659a6ec8421ebe" exitCode=0
Jan 22 16:22:25 crc kubenswrapper[4825]: I0122 16:22:25.200223 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm2jb" event={"ID":"8120efd4-e79b-449f-b2c1-14f90134b348","Type":"ContainerDied","Data":"dcdc40db816299cee459c485731120cce5b9f46d81b43e4232659a6ec8421ebe"}
Jan 22 16:22:26 crc kubenswrapper[4825]: I0122 16:22:26.287938 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm2jb" event={"ID":"8120efd4-e79b-449f-b2c1-14f90134b348","Type":"ContainerStarted","Data":"35f630abcac9dc9cf71da8a92fa2d9e9a9f602f311c3b35d74a112b258c0553d"}
Jan 22 16:22:26 crc kubenswrapper[4825]: I0122 16:22:26.314506 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mm2jb" podStartSLOduration=3.815655782 podStartE2EDuration="6.314485485s" podCreationTimestamp="2026-01-22 16:22:20 +0000 UTC" firstStartedPulling="2026-01-22 16:22:23.127697702 +0000 UTC m=+3489.889224612" lastFinishedPulling="2026-01-22 16:22:25.626527395 +0000 UTC m=+3492.388054315" observedRunningTime="2026-01-22 16:22:26.307720832 +0000 UTC m=+3493.069247742" watchObservedRunningTime="2026-01-22 16:22:26.314485485 +0000 UTC m=+3493.076012395"
Jan 22 16:22:31 crc kubenswrapper[4825]: I0122 16:22:31.528796 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mm2jb"
Jan 22 16:22:31 crc kubenswrapper[4825]: I0122 16:22:31.529390 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mm2jb"
Jan 22 16:22:31 crc kubenswrapper[4825]: I0122 16:22:31.581729 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mm2jb"
Jan 22 16:22:32 crc kubenswrapper[4825]: I0122 16:22:32.406666 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mm2jb"
Jan 22 16:22:32 crc kubenswrapper[4825]: I0122 16:22:32.461214 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm2jb"]
Jan 22 16:22:34 crc kubenswrapper[4825]: I0122 16:22:34.376292 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mm2jb" podUID="8120efd4-e79b-449f-b2c1-14f90134b348" containerName="registry-server" containerID="cri-o://35f630abcac9dc9cf71da8a92fa2d9e9a9f602f311c3b35d74a112b258c0553d" gracePeriod=2
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.387615 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mm2jb"
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.390113 4825 generic.go:334] "Generic (PLEG): container finished" podID="8120efd4-e79b-449f-b2c1-14f90134b348" containerID="35f630abcac9dc9cf71da8a92fa2d9e9a9f602f311c3b35d74a112b258c0553d" exitCode=0
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.390155 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm2jb" event={"ID":"8120efd4-e79b-449f-b2c1-14f90134b348","Type":"ContainerDied","Data":"35f630abcac9dc9cf71da8a92fa2d9e9a9f602f311c3b35d74a112b258c0553d"}
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.390451 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm2jb" event={"ID":"8120efd4-e79b-449f-b2c1-14f90134b348","Type":"ContainerDied","Data":"02616cf0c465c4a1b33be47add36c35dcf7652a7cc5c1828d4642acc930f1cc1"}
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.390489 4825 scope.go:117] "RemoveContainer" containerID="35f630abcac9dc9cf71da8a92fa2d9e9a9f602f311c3b35d74a112b258c0553d"
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.433386 4825 scope.go:117] "RemoveContainer" containerID="dcdc40db816299cee459c485731120cce5b9f46d81b43e4232659a6ec8421ebe"
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.441736 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjfq8\" (UniqueName: \"kubernetes.io/projected/8120efd4-e79b-449f-b2c1-14f90134b348-kube-api-access-qjfq8\") pod \"8120efd4-e79b-449f-b2c1-14f90134b348\" (UID: \"8120efd4-e79b-449f-b2c1-14f90134b348\") "
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.442151 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8120efd4-e79b-449f-b2c1-14f90134b348-utilities\") pod \"8120efd4-e79b-449f-b2c1-14f90134b348\" (UID: \"8120efd4-e79b-449f-b2c1-14f90134b348\") "
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.442412 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8120efd4-e79b-449f-b2c1-14f90134b348-catalog-content\") pod \"8120efd4-e79b-449f-b2c1-14f90134b348\" (UID: \"8120efd4-e79b-449f-b2c1-14f90134b348\") "
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.443664 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8120efd4-e79b-449f-b2c1-14f90134b348-utilities" (OuterVolumeSpecName: "utilities") pod "8120efd4-e79b-449f-b2c1-14f90134b348" (UID: "8120efd4-e79b-449f-b2c1-14f90134b348"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.449102 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8120efd4-e79b-449f-b2c1-14f90134b348-kube-api-access-qjfq8" (OuterVolumeSpecName: "kube-api-access-qjfq8") pod "8120efd4-e79b-449f-b2c1-14f90134b348" (UID: "8120efd4-e79b-449f-b2c1-14f90134b348"). InnerVolumeSpecName "kube-api-access-qjfq8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.476352 4825 scope.go:117] "RemoveContainer" containerID="467929221c89b5702ccb2b102644ca3ce10050283f45bd6fcde50ac5338f72c6"
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.479503 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8120efd4-e79b-449f-b2c1-14f90134b348-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8120efd4-e79b-449f-b2c1-14f90134b348" (UID: "8120efd4-e79b-449f-b2c1-14f90134b348"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.541974 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.542042 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.544630 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8120efd4-e79b-449f-b2c1-14f90134b348-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.544649 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjfq8\" (UniqueName: \"kubernetes.io/projected/8120efd4-e79b-449f-b2c1-14f90134b348-kube-api-access-qjfq8\") on node \"crc\" DevicePath \"\""
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.544659 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8120efd4-e79b-449f-b2c1-14f90134b348-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.603697 4825 scope.go:117] "RemoveContainer" containerID="35f630abcac9dc9cf71da8a92fa2d9e9a9f602f311c3b35d74a112b258c0553d"
Jan 22 16:22:35 crc kubenswrapper[4825]: E0122 16:22:35.604717 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35f630abcac9dc9cf71da8a92fa2d9e9a9f602f311c3b35d74a112b258c0553d\": container with ID starting with 35f630abcac9dc9cf71da8a92fa2d9e9a9f602f311c3b35d74a112b258c0553d not found: ID does not exist" containerID="35f630abcac9dc9cf71da8a92fa2d9e9a9f602f311c3b35d74a112b258c0553d"
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.604764 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f630abcac9dc9cf71da8a92fa2d9e9a9f602f311c3b35d74a112b258c0553d"} err="failed to get container status \"35f630abcac9dc9cf71da8a92fa2d9e9a9f602f311c3b35d74a112b258c0553d\": rpc error: code = NotFound desc = could not find container \"35f630abcac9dc9cf71da8a92fa2d9e9a9f602f311c3b35d74a112b258c0553d\": container with ID starting with 35f630abcac9dc9cf71da8a92fa2d9e9a9f602f311c3b35d74a112b258c0553d not found: ID does not exist"
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.604817 4825 scope.go:117] "RemoveContainer" containerID="dcdc40db816299cee459c485731120cce5b9f46d81b43e4232659a6ec8421ebe"
Jan 22 16:22:35 crc kubenswrapper[4825]: E0122 16:22:35.605928 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcdc40db816299cee459c485731120cce5b9f46d81b43e4232659a6ec8421ebe\": container with ID starting with dcdc40db816299cee459c485731120cce5b9f46d81b43e4232659a6ec8421ebe not found: ID does not exist" containerID="dcdc40db816299cee459c485731120cce5b9f46d81b43e4232659a6ec8421ebe"
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.605964 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcdc40db816299cee459c485731120cce5b9f46d81b43e4232659a6ec8421ebe"} err="failed to get container status \"dcdc40db816299cee459c485731120cce5b9f46d81b43e4232659a6ec8421ebe\": rpc error: code = NotFound desc = could not find container \"dcdc40db816299cee459c485731120cce5b9f46d81b43e4232659a6ec8421ebe\": container with ID starting with dcdc40db816299cee459c485731120cce5b9f46d81b43e4232659a6ec8421ebe not found: ID does not exist"
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.606075 4825 scope.go:117] "RemoveContainer" containerID="467929221c89b5702ccb2b102644ca3ce10050283f45bd6fcde50ac5338f72c6"
Jan 22 16:22:35 crc kubenswrapper[4825]: E0122 16:22:35.606352 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"467929221c89b5702ccb2b102644ca3ce10050283f45bd6fcde50ac5338f72c6\": container with ID starting with 467929221c89b5702ccb2b102644ca3ce10050283f45bd6fcde50ac5338f72c6 not found: ID does not exist" containerID="467929221c89b5702ccb2b102644ca3ce10050283f45bd6fcde50ac5338f72c6"
Jan 22 16:22:35 crc kubenswrapper[4825]: I0122 16:22:35.606403 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"467929221c89b5702ccb2b102644ca3ce10050283f45bd6fcde50ac5338f72c6"} err="failed to get container status \"467929221c89b5702ccb2b102644ca3ce10050283f45bd6fcde50ac5338f72c6\": rpc error: code = NotFound desc = could not find container \"467929221c89b5702ccb2b102644ca3ce10050283f45bd6fcde50ac5338f72c6\": container with ID starting with 467929221c89b5702ccb2b102644ca3ce10050283f45bd6fcde50ac5338f72c6 not found: ID does not exist"
Jan 22 16:22:36 crc kubenswrapper[4825]: I0122 16:22:36.497359 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mm2jb"
Jan 22 16:22:36 crc kubenswrapper[4825]: I0122 16:22:36.527149 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm2jb"]
Jan 22 16:22:36 crc kubenswrapper[4825]: I0122 16:22:36.544912 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm2jb"]
Jan 22 16:22:37 crc kubenswrapper[4825]: I0122 16:22:37.574684 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8120efd4-e79b-449f-b2c1-14f90134b348" path="/var/lib/kubelet/pods/8120efd4-e79b-449f-b2c1-14f90134b348/volumes"
Jan 22 16:22:38 crc kubenswrapper[4825]: I0122 16:22:38.532386 4825 generic.go:334] "Generic (PLEG): container finished" podID="d022aa13-5f44-4fc9-8796-f86c575836ce" containerID="8bfa90e8826272620157b48e22094849fb948150eb7588fab720babdb6507b27" exitCode=0
Jan 22 16:22:38 crc kubenswrapper[4825]: I0122 16:22:38.532507 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d022aa13-5f44-4fc9-8796-f86c575836ce","Type":"ContainerDied","Data":"8bfa90e8826272620157b48e22094849fb948150eb7588fab720babdb6507b27"}
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.245899 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.402243 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d022aa13-5f44-4fc9-8796-f86c575836ce-ssh-key\") pod \"d022aa13-5f44-4fc9-8796-f86c575836ce\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") "
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.402450 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d022aa13-5f44-4fc9-8796-f86c575836ce-openstack-config\") pod \"d022aa13-5f44-4fc9-8796-f86c575836ce\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") "
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.402486 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d022aa13-5f44-4fc9-8796-f86c575836ce-config-data\") pod \"d022aa13-5f44-4fc9-8796-f86c575836ce\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") "
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.402530 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d022aa13-5f44-4fc9-8796-f86c575836ce-test-operator-ephemeral-temporary\") pod \"d022aa13-5f44-4fc9-8796-f86c575836ce\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") "
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.402619 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"d022aa13-5f44-4fc9-8796-f86c575836ce\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") "
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.402660 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d022aa13-5f44-4fc9-8796-f86c575836ce-openstack-config-secret\") pod \"d022aa13-5f44-4fc9-8796-f86c575836ce\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") "
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.402714 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d022aa13-5f44-4fc9-8796-f86c575836ce-ca-certs\") pod \"d022aa13-5f44-4fc9-8796-f86c575836ce\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") "
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.402742 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkwwk\" (UniqueName: \"kubernetes.io/projected/d022aa13-5f44-4fc9-8796-f86c575836ce-kube-api-access-vkwwk\") pod \"d022aa13-5f44-4fc9-8796-f86c575836ce\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") "
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.402901 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d022aa13-5f44-4fc9-8796-f86c575836ce-test-operator-ephemeral-workdir\") pod \"d022aa13-5f44-4fc9-8796-f86c575836ce\" (UID: \"d022aa13-5f44-4fc9-8796-f86c575836ce\") "
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.403792 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d022aa13-5f44-4fc9-8796-f86c575836ce-config-data" (OuterVolumeSpecName: "config-data") pod "d022aa13-5f44-4fc9-8796-f86c575836ce" (UID: "d022aa13-5f44-4fc9-8796-f86c575836ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.403805 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d022aa13-5f44-4fc9-8796-f86c575836ce-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "d022aa13-5f44-4fc9-8796-f86c575836ce" (UID: "d022aa13-5f44-4fc9-8796-f86c575836ce"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.410603 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d022aa13-5f44-4fc9-8796-f86c575836ce-kube-api-access-vkwwk" (OuterVolumeSpecName: "kube-api-access-vkwwk") pod "d022aa13-5f44-4fc9-8796-f86c575836ce" (UID: "d022aa13-5f44-4fc9-8796-f86c575836ce"). InnerVolumeSpecName "kube-api-access-vkwwk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.411005 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "d022aa13-5f44-4fc9-8796-f86c575836ce" (UID: "d022aa13-5f44-4fc9-8796-f86c575836ce"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.442086 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d022aa13-5f44-4fc9-8796-f86c575836ce-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d022aa13-5f44-4fc9-8796-f86c575836ce" (UID: "d022aa13-5f44-4fc9-8796-f86c575836ce"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.445081 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d022aa13-5f44-4fc9-8796-f86c575836ce-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "d022aa13-5f44-4fc9-8796-f86c575836ce" (UID: "d022aa13-5f44-4fc9-8796-f86c575836ce"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.462231 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d022aa13-5f44-4fc9-8796-f86c575836ce-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d022aa13-5f44-4fc9-8796-f86c575836ce" (UID: "d022aa13-5f44-4fc9-8796-f86c575836ce"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.468017 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d022aa13-5f44-4fc9-8796-f86c575836ce-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d022aa13-5f44-4fc9-8796-f86c575836ce" (UID: "d022aa13-5f44-4fc9-8796-f86c575836ce"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.505821 4825 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d022aa13-5f44-4fc9-8796-f86c575836ce-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.505857 4825 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d022aa13-5f44-4fc9-8796-f86c575836ce-ca-certs\") on node \"crc\" DevicePath \"\""
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.505866 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkwwk\" (UniqueName: \"kubernetes.io/projected/d022aa13-5f44-4fc9-8796-f86c575836ce-kube-api-access-vkwwk\") on node \"crc\" DevicePath \"\""
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.505874 4825 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d022aa13-5f44-4fc9-8796-f86c575836ce-ssh-key\") on node \"crc\" DevicePath \"\""
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.505883 4825 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d022aa13-5f44-4fc9-8796-f86c575836ce-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.505892 4825 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d022aa13-5f44-4fc9-8796-f86c575836ce-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.505901 4825 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d022aa13-5f44-4fc9-8796-f86c575836ce-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.505939 4825 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.527938 4825 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.554405 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d022aa13-5f44-4fc9-8796-f86c575836ce","Type":"ContainerDied","Data":"66d7c2fe907621b6ab6b532de1bb3c3ad6baf25322ebcd54d13ca32ccea5a829"}
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.554452 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66d7c2fe907621b6ab6b532de1bb3c3ad6baf25322ebcd54d13ca32ccea5a829"
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.554498 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.610082 4825 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.844421 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d022aa13-5f44-4fc9-8796-f86c575836ce-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "d022aa13-5f44-4fc9-8796-f86c575836ce" (UID: "d022aa13-5f44-4fc9-8796-f86c575836ce"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 16:22:40 crc kubenswrapper[4825]: I0122 16:22:40.916176 4825 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d022aa13-5f44-4fc9-8796-f86c575836ce-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Jan 22 16:22:51 crc kubenswrapper[4825]: I0122 16:22:51.066862 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 22 16:22:51 crc kubenswrapper[4825]: E0122 16:22:51.067896 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d022aa13-5f44-4fc9-8796-f86c575836ce" containerName="tempest-tests-tempest-tests-runner"
Jan 22 16:22:51 crc kubenswrapper[4825]: I0122 16:22:51.067912 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d022aa13-5f44-4fc9-8796-f86c575836ce" containerName="tempest-tests-tempest-tests-runner"
Jan 22 16:22:51 crc kubenswrapper[4825]: E0122 16:22:51.067935 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8120efd4-e79b-449f-b2c1-14f90134b348" containerName="registry-server"
Jan 22 16:22:51 crc kubenswrapper[4825]: I0122 16:22:51.067942 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="8120efd4-e79b-449f-b2c1-14f90134b348" containerName="registry-server"
Jan 22 16:22:51 crc kubenswrapper[4825]: E0122 16:22:51.067950 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8120efd4-e79b-449f-b2c1-14f90134b348" containerName="extract-content"
Jan 22 16:22:51 crc kubenswrapper[4825]: I0122 16:22:51.067956 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="8120efd4-e79b-449f-b2c1-14f90134b348" containerName="extract-content"
Jan 22 16:22:51 crc kubenswrapper[4825]: E0122 16:22:51.067999 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8120efd4-e79b-449f-b2c1-14f90134b348" containerName="extract-utilities"
Jan 22 16:22:51 crc
kubenswrapper[4825]: I0122 16:22:51.068008 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="8120efd4-e79b-449f-b2c1-14f90134b348" containerName="extract-utilities" Jan 22 16:22:51 crc kubenswrapper[4825]: I0122 16:22:51.068230 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="8120efd4-e79b-449f-b2c1-14f90134b348" containerName="registry-server" Jan 22 16:22:51 crc kubenswrapper[4825]: I0122 16:22:51.068246 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="d022aa13-5f44-4fc9-8796-f86c575836ce" containerName="tempest-tests-tempest-tests-runner" Jan 22 16:22:51 crc kubenswrapper[4825]: I0122 16:22:51.069123 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 16:22:51 crc kubenswrapper[4825]: I0122 16:22:51.072390 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-tlqp5" Jan 22 16:22:51 crc kubenswrapper[4825]: I0122 16:22:51.076791 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 22 16:22:51 crc kubenswrapper[4825]: I0122 16:22:51.365715 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrtpw\" (UniqueName: \"kubernetes.io/projected/cdf78686-853e-48db-b28b-19a3494c7296-kube-api-access-lrtpw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"cdf78686-853e-48db-b28b-19a3494c7296\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 16:22:51 crc kubenswrapper[4825]: I0122 16:22:51.365800 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"cdf78686-853e-48db-b28b-19a3494c7296\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 16:22:51 crc kubenswrapper[4825]: I0122 16:22:51.467540 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrtpw\" (UniqueName: \"kubernetes.io/projected/cdf78686-853e-48db-b28b-19a3494c7296-kube-api-access-lrtpw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"cdf78686-853e-48db-b28b-19a3494c7296\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 16:22:51 crc kubenswrapper[4825]: I0122 16:22:51.467612 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"cdf78686-853e-48db-b28b-19a3494c7296\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 16:22:51 crc kubenswrapper[4825]: I0122 16:22:51.468238 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"cdf78686-853e-48db-b28b-19a3494c7296\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 16:22:51 crc kubenswrapper[4825]: I0122 16:22:51.503299 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrtpw\" (UniqueName: \"kubernetes.io/projected/cdf78686-853e-48db-b28b-19a3494c7296-kube-api-access-lrtpw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"cdf78686-853e-48db-b28b-19a3494c7296\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 16:22:51 crc kubenswrapper[4825]: I0122 16:22:51.522052 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"cdf78686-853e-48db-b28b-19a3494c7296\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 16:22:51 crc kubenswrapper[4825]: I0122 16:22:51.570268 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 16:22:52 crc kubenswrapper[4825]: I0122 16:22:52.059054 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 22 16:22:52 crc kubenswrapper[4825]: I0122 16:22:52.880372 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"cdf78686-853e-48db-b28b-19a3494c7296","Type":"ContainerStarted","Data":"2e922bc760092df5567e70c691566722d6ddb5aab662587b488340929ea56fae"} Jan 22 16:22:54 crc kubenswrapper[4825]: I0122 16:22:54.911677 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"cdf78686-853e-48db-b28b-19a3494c7296","Type":"ContainerStarted","Data":"d7fa5d475799a39ef57258200edc2404efe586f47454a65c4dfeddfbc2e7615a"} Jan 22 16:22:54 crc kubenswrapper[4825]: I0122 16:22:54.930055 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.540032225 podStartE2EDuration="3.930007353s" podCreationTimestamp="2026-01-22 16:22:51 +0000 UTC" firstStartedPulling="2026-01-22 16:22:52.061780228 +0000 UTC m=+3518.823307148" lastFinishedPulling="2026-01-22 16:22:53.451755366 +0000 UTC m=+3520.213282276" observedRunningTime="2026-01-22 16:22:54.92532427 +0000 UTC m=+3521.686851180" watchObservedRunningTime="2026-01-22 16:22:54.930007353 +0000 UTC m=+3521.691534273" Jan 22 16:23:05 crc 
kubenswrapper[4825]: I0122 16:23:05.542242 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:23:05 crc kubenswrapper[4825]: I0122 16:23:05.543627 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:23:27 crc kubenswrapper[4825]: I0122 16:23:27.113037 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qxnm4/must-gather-cbc8c"] Jan 22 16:23:27 crc kubenswrapper[4825]: I0122 16:23:27.117747 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qxnm4/must-gather-cbc8c" Jan 22 16:23:27 crc kubenswrapper[4825]: I0122 16:23:27.122950 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qxnm4"/"openshift-service-ca.crt" Jan 22 16:23:27 crc kubenswrapper[4825]: I0122 16:23:27.123487 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qxnm4"/"kube-root-ca.crt" Jan 22 16:23:27 crc kubenswrapper[4825]: I0122 16:23:27.123617 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qxnm4"/"default-dockercfg-7vdzv" Jan 22 16:23:27 crc kubenswrapper[4825]: I0122 16:23:27.131444 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qxnm4/must-gather-cbc8c"] Jan 22 16:23:27 crc kubenswrapper[4825]: I0122 16:23:27.298641 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4z75\" (UniqueName: \"kubernetes.io/projected/4a21cb6a-1ef4-4510-8915-ed2e8024268f-kube-api-access-x4z75\") pod \"must-gather-cbc8c\" (UID: \"4a21cb6a-1ef4-4510-8915-ed2e8024268f\") " pod="openshift-must-gather-qxnm4/must-gather-cbc8c" Jan 22 16:23:27 crc kubenswrapper[4825]: I0122 16:23:27.298914 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4a21cb6a-1ef4-4510-8915-ed2e8024268f-must-gather-output\") pod \"must-gather-cbc8c\" (UID: \"4a21cb6a-1ef4-4510-8915-ed2e8024268f\") " pod="openshift-must-gather-qxnm4/must-gather-cbc8c" Jan 22 16:23:27 crc kubenswrapper[4825]: I0122 16:23:27.402353 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4a21cb6a-1ef4-4510-8915-ed2e8024268f-must-gather-output\") pod \"must-gather-cbc8c\" (UID: \"4a21cb6a-1ef4-4510-8915-ed2e8024268f\") " 
pod="openshift-must-gather-qxnm4/must-gather-cbc8c" Jan 22 16:23:27 crc kubenswrapper[4825]: I0122 16:23:27.402473 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4z75\" (UniqueName: \"kubernetes.io/projected/4a21cb6a-1ef4-4510-8915-ed2e8024268f-kube-api-access-x4z75\") pod \"must-gather-cbc8c\" (UID: \"4a21cb6a-1ef4-4510-8915-ed2e8024268f\") " pod="openshift-must-gather-qxnm4/must-gather-cbc8c" Jan 22 16:23:27 crc kubenswrapper[4825]: I0122 16:23:27.403158 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4a21cb6a-1ef4-4510-8915-ed2e8024268f-must-gather-output\") pod \"must-gather-cbc8c\" (UID: \"4a21cb6a-1ef4-4510-8915-ed2e8024268f\") " pod="openshift-must-gather-qxnm4/must-gather-cbc8c" Jan 22 16:23:27 crc kubenswrapper[4825]: I0122 16:23:27.420507 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4z75\" (UniqueName: \"kubernetes.io/projected/4a21cb6a-1ef4-4510-8915-ed2e8024268f-kube-api-access-x4z75\") pod \"must-gather-cbc8c\" (UID: \"4a21cb6a-1ef4-4510-8915-ed2e8024268f\") " pod="openshift-must-gather-qxnm4/must-gather-cbc8c" Jan 22 16:23:27 crc kubenswrapper[4825]: I0122 16:23:27.460392 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qxnm4/must-gather-cbc8c" Jan 22 16:23:28 crc kubenswrapper[4825]: I0122 16:23:28.232372 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qxnm4/must-gather-cbc8c"] Jan 22 16:23:28 crc kubenswrapper[4825]: I0122 16:23:28.438417 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qxnm4/must-gather-cbc8c" event={"ID":"4a21cb6a-1ef4-4510-8915-ed2e8024268f","Type":"ContainerStarted","Data":"a2bf163328e1557a437ea5c2540eff4618dc417d7b0142cfa83bace94b5092b7"} Jan 22 16:23:35 crc kubenswrapper[4825]: I0122 16:23:35.541947 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:23:35 crc kubenswrapper[4825]: I0122 16:23:35.542465 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:23:35 crc kubenswrapper[4825]: I0122 16:23:35.542520 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 16:23:35 crc kubenswrapper[4825]: I0122 16:23:35.543767 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3e05690f72d9972e52963cd90f0219528b87bcd469134ce86f8ebbe78d329a4d"} pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:23:35 crc kubenswrapper[4825]: 
I0122 16:23:35.543843 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" containerID="cri-o://3e05690f72d9972e52963cd90f0219528b87bcd469134ce86f8ebbe78d329a4d" gracePeriod=600 Jan 22 16:23:36 crc kubenswrapper[4825]: I0122 16:23:36.705944 4825 generic.go:334] "Generic (PLEG): container finished" podID="1d6015ae-d193-4854-9861-dc4384510fdb" containerID="3e05690f72d9972e52963cd90f0219528b87bcd469134ce86f8ebbe78d329a4d" exitCode=0 Jan 22 16:23:36 crc kubenswrapper[4825]: I0122 16:23:36.706434 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerDied","Data":"3e05690f72d9972e52963cd90f0219528b87bcd469134ce86f8ebbe78d329a4d"} Jan 22 16:23:36 crc kubenswrapper[4825]: I0122 16:23:36.706467 4825 scope.go:117] "RemoveContainer" containerID="fb5ab7e5d4c908c1c1d4b4c4157cc7d7281c590d32562e932656db8733f12f80" Jan 22 16:23:37 crc kubenswrapper[4825]: I0122 16:23:37.718176 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qxnm4/must-gather-cbc8c" event={"ID":"4a21cb6a-1ef4-4510-8915-ed2e8024268f","Type":"ContainerStarted","Data":"dae42b551adefa5d6962d3c7d606c9274fbef331be446ec1699fdc9cbf24928c"} Jan 22 16:23:37 crc kubenswrapper[4825]: I0122 16:23:37.718721 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qxnm4/must-gather-cbc8c" event={"ID":"4a21cb6a-1ef4-4510-8915-ed2e8024268f","Type":"ContainerStarted","Data":"52a55df71bca772738d87e3a91c2b1d3d4b8eb04ed6f4b29e79e30ca5e5de43b"} Jan 22 16:23:37 crc kubenswrapper[4825]: I0122 16:23:37.723101 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" 
event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerStarted","Data":"badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea"} Jan 22 16:23:37 crc kubenswrapper[4825]: I0122 16:23:37.746430 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qxnm4/must-gather-cbc8c" podStartSLOduration=2.004311366 podStartE2EDuration="10.741961048s" podCreationTimestamp="2026-01-22 16:23:27 +0000 UTC" firstStartedPulling="2026-01-22 16:23:28.235675548 +0000 UTC m=+3554.997202458" lastFinishedPulling="2026-01-22 16:23:36.97332523 +0000 UTC m=+3563.734852140" observedRunningTime="2026-01-22 16:23:37.735088553 +0000 UTC m=+3564.496615463" watchObservedRunningTime="2026-01-22 16:23:37.741961048 +0000 UTC m=+3564.503487948" Jan 22 16:23:41 crc kubenswrapper[4825]: I0122 16:23:41.712576 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qxnm4/crc-debug-b7kx8"] Jan 22 16:23:41 crc kubenswrapper[4825]: I0122 16:23:41.714971 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qxnm4/crc-debug-b7kx8" Jan 22 16:23:41 crc kubenswrapper[4825]: I0122 16:23:41.864703 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9-host\") pod \"crc-debug-b7kx8\" (UID: \"5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9\") " pod="openshift-must-gather-qxnm4/crc-debug-b7kx8" Jan 22 16:23:41 crc kubenswrapper[4825]: I0122 16:23:41.865109 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qccnw\" (UniqueName: \"kubernetes.io/projected/5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9-kube-api-access-qccnw\") pod \"crc-debug-b7kx8\" (UID: \"5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9\") " pod="openshift-must-gather-qxnm4/crc-debug-b7kx8" Jan 22 16:23:41 crc kubenswrapper[4825]: I0122 16:23:41.967299 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9-host\") pod \"crc-debug-b7kx8\" (UID: \"5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9\") " pod="openshift-must-gather-qxnm4/crc-debug-b7kx8" Jan 22 16:23:41 crc kubenswrapper[4825]: I0122 16:23:41.967397 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qccnw\" (UniqueName: \"kubernetes.io/projected/5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9-kube-api-access-qccnw\") pod \"crc-debug-b7kx8\" (UID: \"5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9\") " pod="openshift-must-gather-qxnm4/crc-debug-b7kx8" Jan 22 16:23:41 crc kubenswrapper[4825]: I0122 16:23:41.967440 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9-host\") pod \"crc-debug-b7kx8\" (UID: \"5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9\") " pod="openshift-must-gather-qxnm4/crc-debug-b7kx8" Jan 22 16:23:41 crc 
kubenswrapper[4825]: I0122 16:23:41.994550 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qccnw\" (UniqueName: \"kubernetes.io/projected/5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9-kube-api-access-qccnw\") pod \"crc-debug-b7kx8\" (UID: \"5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9\") " pod="openshift-must-gather-qxnm4/crc-debug-b7kx8" Jan 22 16:23:42 crc kubenswrapper[4825]: I0122 16:23:42.034038 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qxnm4/crc-debug-b7kx8" Jan 22 16:23:42 crc kubenswrapper[4825]: I0122 16:23:42.833076 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qxnm4/crc-debug-b7kx8" event={"ID":"5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9","Type":"ContainerStarted","Data":"90b2bdb0b7530702c1ee797655cff9935e9a1bb99bc0afb3771b9c0e6949db10"} Jan 22 16:23:58 crc kubenswrapper[4825]: I0122 16:23:58.310656 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qxnm4/crc-debug-b7kx8" event={"ID":"5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9","Type":"ContainerStarted","Data":"aa35ad3801a7cb80a7b0e17a4695a65608fe4dcb6a28e9748a8f2921d040543a"} Jan 22 16:23:58 crc kubenswrapper[4825]: I0122 16:23:58.337754 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qxnm4/crc-debug-b7kx8" podStartSLOduration=1.965715594 podStartE2EDuration="17.337735987s" podCreationTimestamp="2026-01-22 16:23:41 +0000 UTC" firstStartedPulling="2026-01-22 16:23:42.085695383 +0000 UTC m=+3568.847222293" lastFinishedPulling="2026-01-22 16:23:57.457715776 +0000 UTC m=+3584.219242686" observedRunningTime="2026-01-22 16:23:58.333234049 +0000 UTC m=+3585.094760959" watchObservedRunningTime="2026-01-22 16:23:58.337735987 +0000 UTC m=+3585.099262897" Jan 22 16:24:56 crc kubenswrapper[4825]: I0122 16:24:56.235384 4825 generic.go:334] "Generic (PLEG): container finished" podID="5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9" 
containerID="aa35ad3801a7cb80a7b0e17a4695a65608fe4dcb6a28e9748a8f2921d040543a" exitCode=0 Jan 22 16:24:56 crc kubenswrapper[4825]: I0122 16:24:56.236003 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qxnm4/crc-debug-b7kx8" event={"ID":"5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9","Type":"ContainerDied","Data":"aa35ad3801a7cb80a7b0e17a4695a65608fe4dcb6a28e9748a8f2921d040543a"} Jan 22 16:24:57 crc kubenswrapper[4825]: I0122 16:24:57.395989 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qxnm4/crc-debug-b7kx8" Jan 22 16:24:57 crc kubenswrapper[4825]: I0122 16:24:57.439202 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qxnm4/crc-debug-b7kx8"] Jan 22 16:24:57 crc kubenswrapper[4825]: I0122 16:24:57.447837 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qxnm4/crc-debug-b7kx8"] Jan 22 16:24:57 crc kubenswrapper[4825]: I0122 16:24:57.514215 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9-host\") pod \"5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9\" (UID: \"5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9\") " Jan 22 16:24:57 crc kubenswrapper[4825]: I0122 16:24:57.514336 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9-host" (OuterVolumeSpecName: "host") pod "5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9" (UID: "5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:24:57 crc kubenswrapper[4825]: I0122 16:24:57.514391 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qccnw\" (UniqueName: \"kubernetes.io/projected/5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9-kube-api-access-qccnw\") pod \"5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9\" (UID: \"5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9\") " Jan 22 16:24:57 crc kubenswrapper[4825]: I0122 16:24:57.515471 4825 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9-host\") on node \"crc\" DevicePath \"\"" Jan 22 16:24:57 crc kubenswrapper[4825]: I0122 16:24:57.520740 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9-kube-api-access-qccnw" (OuterVolumeSpecName: "kube-api-access-qccnw") pod "5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9" (UID: "5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9"). InnerVolumeSpecName "kube-api-access-qccnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:24:57 crc kubenswrapper[4825]: I0122 16:24:57.530773 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9" path="/var/lib/kubelet/pods/5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9/volumes" Jan 22 16:24:57 crc kubenswrapper[4825]: I0122 16:24:57.619633 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qccnw\" (UniqueName: \"kubernetes.io/projected/5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9-kube-api-access-qccnw\") on node \"crc\" DevicePath \"\"" Jan 22 16:24:58 crc kubenswrapper[4825]: I0122 16:24:58.269360 4825 scope.go:117] "RemoveContainer" containerID="aa35ad3801a7cb80a7b0e17a4695a65608fe4dcb6a28e9748a8f2921d040543a" Jan 22 16:24:58 crc kubenswrapper[4825]: I0122 16:24:58.269698 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qxnm4/crc-debug-b7kx8" Jan 22 16:24:58 crc kubenswrapper[4825]: I0122 16:24:58.676763 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qxnm4/crc-debug-tbltl"] Jan 22 16:24:58 crc kubenswrapper[4825]: E0122 16:24:58.677331 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9" containerName="container-00" Jan 22 16:24:58 crc kubenswrapper[4825]: I0122 16:24:58.677350 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9" containerName="container-00" Jan 22 16:24:58 crc kubenswrapper[4825]: I0122 16:24:58.677640 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ea91b1e-7daf-4dee-b404-4d8bb4c4b8a9" containerName="container-00" Jan 22 16:24:58 crc kubenswrapper[4825]: I0122 16:24:58.678495 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qxnm4/crc-debug-tbltl" Jan 22 16:24:58 crc kubenswrapper[4825]: I0122 16:24:58.759745 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nfgv\" (UniqueName: \"kubernetes.io/projected/fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731-kube-api-access-5nfgv\") pod \"crc-debug-tbltl\" (UID: \"fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731\") " pod="openshift-must-gather-qxnm4/crc-debug-tbltl" Jan 22 16:24:58 crc kubenswrapper[4825]: I0122 16:24:58.760173 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731-host\") pod \"crc-debug-tbltl\" (UID: \"fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731\") " pod="openshift-must-gather-qxnm4/crc-debug-tbltl" Jan 22 16:24:58 crc kubenswrapper[4825]: I0122 16:24:58.862498 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nfgv\" (UniqueName: 
\"kubernetes.io/projected/fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731-kube-api-access-5nfgv\") pod \"crc-debug-tbltl\" (UID: \"fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731\") " pod="openshift-must-gather-qxnm4/crc-debug-tbltl" Jan 22 16:24:58 crc kubenswrapper[4825]: I0122 16:24:58.862694 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731-host\") pod \"crc-debug-tbltl\" (UID: \"fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731\") " pod="openshift-must-gather-qxnm4/crc-debug-tbltl" Jan 22 16:24:58 crc kubenswrapper[4825]: I0122 16:24:58.863067 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731-host\") pod \"crc-debug-tbltl\" (UID: \"fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731\") " pod="openshift-must-gather-qxnm4/crc-debug-tbltl" Jan 22 16:24:58 crc kubenswrapper[4825]: I0122 16:24:58.886067 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nfgv\" (UniqueName: \"kubernetes.io/projected/fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731-kube-api-access-5nfgv\") pod \"crc-debug-tbltl\" (UID: \"fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731\") " pod="openshift-must-gather-qxnm4/crc-debug-tbltl" Jan 22 16:24:58 crc kubenswrapper[4825]: I0122 16:24:58.997025 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qxnm4/crc-debug-tbltl" Jan 22 16:24:59 crc kubenswrapper[4825]: W0122 16:24:59.028019 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc7918b2_fd33_4b1b_8ebf_91bd5ec6a731.slice/crio-3a29bcc8a03680ef3f03cdef78afde8bd9ab6a7dcb5c6b6658488402d48ba567 WatchSource:0}: Error finding container 3a29bcc8a03680ef3f03cdef78afde8bd9ab6a7dcb5c6b6658488402d48ba567: Status 404 returned error can't find the container with id 3a29bcc8a03680ef3f03cdef78afde8bd9ab6a7dcb5c6b6658488402d48ba567 Jan 22 16:24:59 crc kubenswrapper[4825]: I0122 16:24:59.281952 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qxnm4/crc-debug-tbltl" event={"ID":"fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731","Type":"ContainerStarted","Data":"3a29bcc8a03680ef3f03cdef78afde8bd9ab6a7dcb5c6b6658488402d48ba567"} Jan 22 16:25:00 crc kubenswrapper[4825]: I0122 16:25:00.296688 4825 generic.go:334] "Generic (PLEG): container finished" podID="fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731" containerID="d8aa6a3b76f475b36f0e2acb3b89c90d09133a11d712aa3e74aeff02383bdb6a" exitCode=0 Jan 22 16:25:00 crc kubenswrapper[4825]: I0122 16:25:00.296790 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qxnm4/crc-debug-tbltl" event={"ID":"fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731","Type":"ContainerDied","Data":"d8aa6a3b76f475b36f0e2acb3b89c90d09133a11d712aa3e74aeff02383bdb6a"} Jan 22 16:25:01 crc kubenswrapper[4825]: I0122 16:25:01.506490 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qxnm4/crc-debug-tbltl" Jan 22 16:25:01 crc kubenswrapper[4825]: I0122 16:25:01.711130 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731-host\") pod \"fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731\" (UID: \"fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731\") " Jan 22 16:25:01 crc kubenswrapper[4825]: I0122 16:25:01.711240 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731-host" (OuterVolumeSpecName: "host") pod "fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731" (UID: "fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:25:01 crc kubenswrapper[4825]: I0122 16:25:01.711339 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nfgv\" (UniqueName: \"kubernetes.io/projected/fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731-kube-api-access-5nfgv\") pod \"fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731\" (UID: \"fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731\") " Jan 22 16:25:01 crc kubenswrapper[4825]: I0122 16:25:01.712097 4825 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731-host\") on node \"crc\" DevicePath \"\"" Jan 22 16:25:02 crc kubenswrapper[4825]: I0122 16:25:02.322568 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qxnm4/crc-debug-tbltl" event={"ID":"fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731","Type":"ContainerDied","Data":"3a29bcc8a03680ef3f03cdef78afde8bd9ab6a7dcb5c6b6658488402d48ba567"} Jan 22 16:25:02 crc kubenswrapper[4825]: I0122 16:25:02.322632 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a29bcc8a03680ef3f03cdef78afde8bd9ab6a7dcb5c6b6658488402d48ba567" Jan 22 16:25:02 crc 
kubenswrapper[4825]: I0122 16:25:02.322683 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qxnm4/crc-debug-tbltl" Jan 22 16:25:02 crc kubenswrapper[4825]: I0122 16:25:02.830643 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731-kube-api-access-5nfgv" (OuterVolumeSpecName: "kube-api-access-5nfgv") pod "fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731" (UID: "fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731"). InnerVolumeSpecName "kube-api-access-5nfgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:25:02 crc kubenswrapper[4825]: I0122 16:25:02.850257 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nfgv\" (UniqueName: \"kubernetes.io/projected/fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731-kube-api-access-5nfgv\") on node \"crc\" DevicePath \"\"" Jan 22 16:25:02 crc kubenswrapper[4825]: I0122 16:25:02.872861 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-must-gather-qxnm4/crc-debug-tbltl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:25:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"}],\\\"phase\\\":\\\"Succeeded\\\"}}\" for pod \"openshift-must-gather-qxnm4\"/\"crc-debug-tbltl\": pods \"crc-debug-tbltl\" not found" Jan 22 16:25:02 crc kubenswrapper[4825]: I0122 16:25:02.899054 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qxnm4/crc-debug-tbltl"] Jan 22 16:25:02 crc kubenswrapper[4825]: I0122 16:25:02.922501 4825 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-must-gather-qxnm4/crc-debug-tbltl"] Jan 22 16:25:03 crc kubenswrapper[4825]: I0122 16:25:03.531588 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731" path="/var/lib/kubelet/pods/fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731/volumes" Jan 22 16:25:04 crc kubenswrapper[4825]: I0122 16:25:04.296726 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qxnm4/crc-debug-jr6nf"] Jan 22 16:25:04 crc kubenswrapper[4825]: E0122 16:25:04.297199 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731" containerName="container-00" Jan 22 16:25:04 crc kubenswrapper[4825]: I0122 16:25:04.297213 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731" containerName="container-00" Jan 22 16:25:04 crc kubenswrapper[4825]: I0122 16:25:04.297442 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc7918b2-fd33-4b1b-8ebf-91bd5ec6a731" containerName="container-00" Jan 22 16:25:04 crc kubenswrapper[4825]: I0122 16:25:04.298247 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qxnm4/crc-debug-jr6nf" Jan 22 16:25:04 crc kubenswrapper[4825]: I0122 16:25:04.417736 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvpgz\" (UniqueName: \"kubernetes.io/projected/856e5f66-282f-414e-abf4-9a8d6ffb108e-kube-api-access-xvpgz\") pod \"crc-debug-jr6nf\" (UID: \"856e5f66-282f-414e-abf4-9a8d6ffb108e\") " pod="openshift-must-gather-qxnm4/crc-debug-jr6nf" Jan 22 16:25:04 crc kubenswrapper[4825]: I0122 16:25:04.417804 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/856e5f66-282f-414e-abf4-9a8d6ffb108e-host\") pod \"crc-debug-jr6nf\" (UID: \"856e5f66-282f-414e-abf4-9a8d6ffb108e\") " pod="openshift-must-gather-qxnm4/crc-debug-jr6nf" Jan 22 16:25:04 crc kubenswrapper[4825]: I0122 16:25:04.570231 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvpgz\" (UniqueName: \"kubernetes.io/projected/856e5f66-282f-414e-abf4-9a8d6ffb108e-kube-api-access-xvpgz\") pod \"crc-debug-jr6nf\" (UID: \"856e5f66-282f-414e-abf4-9a8d6ffb108e\") " pod="openshift-must-gather-qxnm4/crc-debug-jr6nf" Jan 22 16:25:04 crc kubenswrapper[4825]: I0122 16:25:04.570305 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/856e5f66-282f-414e-abf4-9a8d6ffb108e-host\") pod \"crc-debug-jr6nf\" (UID: \"856e5f66-282f-414e-abf4-9a8d6ffb108e\") " pod="openshift-must-gather-qxnm4/crc-debug-jr6nf" Jan 22 16:25:04 crc kubenswrapper[4825]: I0122 16:25:04.571587 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/856e5f66-282f-414e-abf4-9a8d6ffb108e-host\") pod \"crc-debug-jr6nf\" (UID: \"856e5f66-282f-414e-abf4-9a8d6ffb108e\") " pod="openshift-must-gather-qxnm4/crc-debug-jr6nf" Jan 22 16:25:04 crc 
kubenswrapper[4825]: I0122 16:25:04.610376 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvpgz\" (UniqueName: \"kubernetes.io/projected/856e5f66-282f-414e-abf4-9a8d6ffb108e-kube-api-access-xvpgz\") pod \"crc-debug-jr6nf\" (UID: \"856e5f66-282f-414e-abf4-9a8d6ffb108e\") " pod="openshift-must-gather-qxnm4/crc-debug-jr6nf" Jan 22 16:25:04 crc kubenswrapper[4825]: I0122 16:25:04.615145 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qxnm4/crc-debug-jr6nf" Jan 22 16:25:05 crc kubenswrapper[4825]: I0122 16:25:05.594437 4825 generic.go:334] "Generic (PLEG): container finished" podID="856e5f66-282f-414e-abf4-9a8d6ffb108e" containerID="54b854376d1b470a5b45d046b9b0b8a38ff14ab8c2d397b351a727d7d152a72a" exitCode=0 Jan 22 16:25:05 crc kubenswrapper[4825]: I0122 16:25:05.594595 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qxnm4/crc-debug-jr6nf" event={"ID":"856e5f66-282f-414e-abf4-9a8d6ffb108e","Type":"ContainerDied","Data":"54b854376d1b470a5b45d046b9b0b8a38ff14ab8c2d397b351a727d7d152a72a"} Jan 22 16:25:05 crc kubenswrapper[4825]: I0122 16:25:05.594889 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qxnm4/crc-debug-jr6nf" event={"ID":"856e5f66-282f-414e-abf4-9a8d6ffb108e","Type":"ContainerStarted","Data":"fdd6fa7f1a1e5eaed21f726015491109431117acb343934a3c5084230345d11b"} Jan 22 16:25:05 crc kubenswrapper[4825]: I0122 16:25:05.636744 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qxnm4/crc-debug-jr6nf"] Jan 22 16:25:05 crc kubenswrapper[4825]: I0122 16:25:05.648705 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qxnm4/crc-debug-jr6nf"] Jan 22 16:25:06 crc kubenswrapper[4825]: I0122 16:25:06.715755 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qxnm4/crc-debug-jr6nf" Jan 22 16:25:06 crc kubenswrapper[4825]: I0122 16:25:06.908226 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/856e5f66-282f-414e-abf4-9a8d6ffb108e-host\") pod \"856e5f66-282f-414e-abf4-9a8d6ffb108e\" (UID: \"856e5f66-282f-414e-abf4-9a8d6ffb108e\") " Jan 22 16:25:06 crc kubenswrapper[4825]: I0122 16:25:06.908388 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvpgz\" (UniqueName: \"kubernetes.io/projected/856e5f66-282f-414e-abf4-9a8d6ffb108e-kube-api-access-xvpgz\") pod \"856e5f66-282f-414e-abf4-9a8d6ffb108e\" (UID: \"856e5f66-282f-414e-abf4-9a8d6ffb108e\") " Jan 22 16:25:06 crc kubenswrapper[4825]: I0122 16:25:06.908881 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/856e5f66-282f-414e-abf4-9a8d6ffb108e-host" (OuterVolumeSpecName: "host") pod "856e5f66-282f-414e-abf4-9a8d6ffb108e" (UID: "856e5f66-282f-414e-abf4-9a8d6ffb108e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:25:06 crc kubenswrapper[4825]: I0122 16:25:06.916018 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/856e5f66-282f-414e-abf4-9a8d6ffb108e-kube-api-access-xvpgz" (OuterVolumeSpecName: "kube-api-access-xvpgz") pod "856e5f66-282f-414e-abf4-9a8d6ffb108e" (UID: "856e5f66-282f-414e-abf4-9a8d6ffb108e"). InnerVolumeSpecName "kube-api-access-xvpgz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:25:07 crc kubenswrapper[4825]: I0122 16:25:07.011027 4825 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/856e5f66-282f-414e-abf4-9a8d6ffb108e-host\") on node \"crc\" DevicePath \"\"" Jan 22 16:25:07 crc kubenswrapper[4825]: I0122 16:25:07.011340 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvpgz\" (UniqueName: \"kubernetes.io/projected/856e5f66-282f-414e-abf4-9a8d6ffb108e-kube-api-access-xvpgz\") on node \"crc\" DevicePath \"\"" Jan 22 16:25:07 crc kubenswrapper[4825]: I0122 16:25:07.600973 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="856e5f66-282f-414e-abf4-9a8d6ffb108e" path="/var/lib/kubelet/pods/856e5f66-282f-414e-abf4-9a8d6ffb108e/volumes" Jan 22 16:25:07 crc kubenswrapper[4825]: I0122 16:25:07.613933 4825 scope.go:117] "RemoveContainer" containerID="54b854376d1b470a5b45d046b9b0b8a38ff14ab8c2d397b351a727d7d152a72a" Jan 22 16:25:07 crc kubenswrapper[4825]: I0122 16:25:07.614075 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qxnm4/crc-debug-jr6nf" Jan 22 16:25:38 crc kubenswrapper[4825]: I0122 16:25:38.979591 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_1f7994d5-5cc8-4830-bcd1-9f63b9109a09/init-config-reloader/0.log" Jan 22 16:25:39 crc kubenswrapper[4825]: I0122 16:25:39.213608 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_1f7994d5-5cc8-4830-bcd1-9f63b9109a09/alertmanager/0.log" Jan 22 16:25:39 crc kubenswrapper[4825]: I0122 16:25:39.245700 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_1f7994d5-5cc8-4830-bcd1-9f63b9109a09/init-config-reloader/0.log" Jan 22 16:25:39 crc kubenswrapper[4825]: I0122 16:25:39.325063 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_1f7994d5-5cc8-4830-bcd1-9f63b9109a09/config-reloader/0.log" Jan 22 16:25:39 crc kubenswrapper[4825]: I0122 16:25:39.527955 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-647867566-jbd62_4cc36209-9086-4104-ac1c-0483ff8f05e6/barbican-api/0.log" Jan 22 16:25:39 crc kubenswrapper[4825]: I0122 16:25:39.801749 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-647867566-jbd62_4cc36209-9086-4104-ac1c-0483ff8f05e6/barbican-api-log/0.log" Jan 22 16:25:39 crc kubenswrapper[4825]: I0122 16:25:39.848837 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-76545dccfd-vqph7_1ccd62bc-d183-4918-91d6-fd5be08f6dc1/barbican-keystone-listener/0.log" Jan 22 16:25:40 crc kubenswrapper[4825]: I0122 16:25:40.057340 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-76545dccfd-vqph7_1ccd62bc-d183-4918-91d6-fd5be08f6dc1/barbican-keystone-listener-log/0.log" Jan 22 16:25:40 crc kubenswrapper[4825]: I0122 
16:25:40.082246 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-894b498b5-mnnlr_349354a1-c3d7-4f6a-b85a-3a7b490b98da/barbican-worker/0.log" Jan 22 16:25:40 crc kubenswrapper[4825]: I0122 16:25:40.159306 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-894b498b5-mnnlr_349354a1-c3d7-4f6a-b85a-3a7b490b98da/barbican-worker-log/0.log" Jan 22 16:25:40 crc kubenswrapper[4825]: I0122 16:25:40.302234 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-x97q6_793f46d5-06cd-4273-9905-f235c6bc4f72/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 16:25:40 crc kubenswrapper[4825]: I0122 16:25:40.448502 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3a2eadc4-a314-4c54-bdce-455b3697e4ad/ceilometer-central-agent/0.log" Jan 22 16:25:40 crc kubenswrapper[4825]: I0122 16:25:40.549071 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3a2eadc4-a314-4c54-bdce-455b3697e4ad/ceilometer-notification-agent/0.log" Jan 22 16:25:40 crc kubenswrapper[4825]: I0122 16:25:40.604131 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3a2eadc4-a314-4c54-bdce-455b3697e4ad/proxy-httpd/0.log" Jan 22 16:25:40 crc kubenswrapper[4825]: I0122 16:25:40.738362 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3a2eadc4-a314-4c54-bdce-455b3697e4ad/sg-core/0.log" Jan 22 16:25:40 crc kubenswrapper[4825]: I0122 16:25:40.839012 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c8f0065f-b8fd-4a5d-a098-2db018daf9ee/cinder-api/0.log" Jan 22 16:25:40 crc kubenswrapper[4825]: I0122 16:25:40.848614 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c8f0065f-b8fd-4a5d-a098-2db018daf9ee/cinder-api-log/0.log" Jan 22 16:25:41 crc kubenswrapper[4825]: 
I0122 16:25:41.088082 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_efc4890e-42b2-4bc7-98fa-40e22ecc24ad/probe/0.log" Jan 22 16:25:41 crc kubenswrapper[4825]: I0122 16:25:41.154918 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_efc4890e-42b2-4bc7-98fa-40e22ecc24ad/cinder-scheduler/0.log" Jan 22 16:25:41 crc kubenswrapper[4825]: I0122 16:25:41.299506 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_a099ee6b-e91c-4017-92ec-ad9289342d56/cloudkitty-api-log/0.log" Jan 22 16:25:41 crc kubenswrapper[4825]: I0122 16:25:41.388687 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_a099ee6b-e91c-4017-92ec-ad9289342d56/cloudkitty-api/0.log" Jan 22 16:25:41 crc kubenswrapper[4825]: I0122 16:25:41.486760 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_d2333364-70e2-4c7a-933e-142e0ebed301/loki-compactor/0.log" Jan 22 16:25:41 crc kubenswrapper[4825]: I0122 16:25:41.659030 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-66dfd9bb-j9kkt_dba34c46-ef4e-4315-8b1d-1f1946e329a7/loki-distributor/0.log" Jan 22 16:25:41 crc kubenswrapper[4825]: I0122 16:25:41.810482 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7db4f4db8c-5v2wm_c68c2d8f-a469-4ce1-8b16-f7afa18f9cfc/gateway/0.log" Jan 22 16:25:42 crc kubenswrapper[4825]: I0122 16:25:42.415814 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7db4f4db8c-g56j8_d0112c91-a6fe-4b93-aff9-49f108a64603/gateway/0.log" Jan 22 16:25:42 crc kubenswrapper[4825]: I0122 16:25:42.483860 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_2904ce28-f3f4-41ad-8612-36e924ab3d32/loki-index-gateway/0.log" Jan 
22 16:25:42 crc kubenswrapper[4825]: I0122 16:25:42.818560 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_57cba631-503b-4795-8463-3d1e50957d58/loki-ingester/0.log" Jan 22 16:25:43 crc kubenswrapper[4825]: I0122 16:25:43.209507 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-5cd44666df-r4krs_14563680-8847-4136-9955-836dd8331930/loki-query-frontend/0.log" Jan 22 16:25:43 crc kubenswrapper[4825]: I0122 16:25:43.266296 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-795fd8f8cc-8vxr8_28ded780-a2df-4624-807e-2426859b0a95/loki-querier/0.log" Jan 22 16:25:43 crc kubenswrapper[4825]: I0122 16:25:43.763566 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-7jcbs_c37b521d-eed3-4bfc-895d-f8349240a58b/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 16:25:43 crc kubenswrapper[4825]: I0122 16:25:43.892334 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-m2b52_90cd4aa4-003a-423a-a15b-1f0321375a34/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 16:25:44 crc kubenswrapper[4825]: I0122 16:25:44.155095 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-4w7hj_d69d25bc-5530-4482-9394-2d89c1b92f5a/init/0.log" Jan 22 16:25:44 crc kubenswrapper[4825]: I0122 16:25:44.461909 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-4w7hj_d69d25bc-5530-4482-9394-2d89c1b92f5a/dnsmasq-dns/0.log" Jan 22 16:25:44 crc kubenswrapper[4825]: I0122 16:25:44.488753 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-4fqfc_95d9e491-a6ee-43ee-bdee-a94b23e1e510/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 16:25:44 crc kubenswrapper[4825]: I0122 16:25:44.496392 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-4w7hj_d69d25bc-5530-4482-9394-2d89c1b92f5a/init/0.log" Jan 22 16:25:45 crc kubenswrapper[4825]: I0122 16:25:45.279939 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fa216742-9142-43e8-a320-47c91f44da7e/glance-httpd/0.log" Jan 22 16:25:45 crc kubenswrapper[4825]: I0122 16:25:45.322242 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fa216742-9142-43e8-a320-47c91f44da7e/glance-log/0.log" Jan 22 16:25:45 crc kubenswrapper[4825]: I0122 16:25:45.492134 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_21a15af5-89b0-4b01-818e-318d7930e7cf/glance-httpd/0.log" Jan 22 16:25:45 crc kubenswrapper[4825]: I0122 16:25:45.555227 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_21a15af5-89b0-4b01-818e-318d7930e7cf/glance-log/0.log" Jan 22 16:25:45 crc kubenswrapper[4825]: I0122 16:25:45.669517 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-52jcm_7fee2632-6167-4d03-adb1-b103201abb59/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 16:25:45 crc kubenswrapper[4825]: I0122 16:25:45.790755 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-4544r_d5f55dc4-ff3c-456c-8d34-b5143b856f0a/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 16:25:46 crc kubenswrapper[4825]: I0122 16:25:46.400194 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29484961-4jx79_b7fdc09c-d5ae-4def-a124-b7e5e8a0b23f/keystone-cron/0.log" Jan 22 16:25:46 crc kubenswrapper[4825]: I0122 16:25:46.682283 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a64e8f75-44a3-495b-bd22-94db8fd34687/kube-state-metrics/0.log" Jan 22 16:25:46 crc kubenswrapper[4825]: I0122 16:25:46.723513 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7bfd68784d-7vgv2_e2f8bb1f-7234-465d-96ba-cd26f508d35a/keystone-api/0.log" Jan 22 16:25:46 crc kubenswrapper[4825]: I0122 16:25:46.834786 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-hxnfx_4289f922-fcbd-4485-8fca-83f858eb39a2/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 16:25:47 crc kubenswrapper[4825]: I0122 16:25:47.643110 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5fdbbbd487-qbcwc_d0008df0-93d9-43ac-b31b-3eed1b711628/neutron-api/0.log" Jan 22 16:25:47 crc kubenswrapper[4825]: I0122 16:25:47.986724 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5fdbbbd487-qbcwc_d0008df0-93d9-43ac-b31b-3eed1b711628/neutron-httpd/0.log" Jan 22 16:25:48 crc kubenswrapper[4825]: I0122 16:25:48.286083 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-46ctv_6300fe1a-799f-43a4-943b-b62dc552c5fb/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 16:25:48 crc kubenswrapper[4825]: I0122 16:25:48.908171 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_151dca33-da19-4a32-948e-ec8bc6d14829/nova-api-log/0.log" Jan 22 16:25:49 crc kubenswrapper[4825]: I0122 16:25:49.077338 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_151dca33-da19-4a32-948e-ec8bc6d14829/nova-api-api/0.log" Jan 22 16:25:49 crc 
kubenswrapper[4825]: I0122 16:25:49.501718 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_a97addeb-cbff-4929-9a25-e1a5de50a83d/nova-cell0-conductor-conductor/0.log" Jan 22 16:25:49 crc kubenswrapper[4825]: I0122 16:25:49.650913 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_9fadfc09-61e5-4bda-b09d-2bd3d609dffb/nova-cell1-conductor-conductor/0.log" Jan 22 16:25:49 crc kubenswrapper[4825]: I0122 16:25:49.764650 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_bd2d3808-2332-4ae9-b3c0-2e58ba48437c/nova-cell1-novncproxy-novncproxy/0.log" Jan 22 16:25:50 crc kubenswrapper[4825]: I0122 16:25:50.111324 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-nc2g4_e461766f-09e2-4b85-87e7-9e5048f701cd/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 16:25:50 crc kubenswrapper[4825]: I0122 16:25:50.872157 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e63e3f9f-d983-4643-b3be-804cb489ac96/nova-metadata-log/0.log" Jan 22 16:25:51 crc kubenswrapper[4825]: I0122 16:25:51.876371 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ac2aec93-3b95-48dd-8799-49e89387ab25/nova-scheduler-scheduler/0.log" Jan 22 16:25:52 crc kubenswrapper[4825]: I0122 16:25:52.127757 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c562767d-1bda-4a9f-beec-5629395ca332/mysql-bootstrap/0.log" Jan 22 16:25:52 crc kubenswrapper[4825]: I0122 16:25:52.331382 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c562767d-1bda-4a9f-beec-5629395ca332/mysql-bootstrap/0.log" Jan 22 16:25:52 crc kubenswrapper[4825]: I0122 16:25:52.421881 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_c562767d-1bda-4a9f-beec-5629395ca332/galera/0.log" Jan 22 16:25:52 crc kubenswrapper[4825]: I0122 16:25:52.635648 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e63e3f9f-d983-4643-b3be-804cb489ac96/nova-metadata-metadata/0.log" Jan 22 16:25:52 crc kubenswrapper[4825]: I0122 16:25:52.662034 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6b65dd5d-6fe0-4cec-a8d4-d05b099607af/mysql-bootstrap/0.log" Jan 22 16:25:53 crc kubenswrapper[4825]: I0122 16:25:53.231393 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6b65dd5d-6fe0-4cec-a8d4-d05b099607af/galera/0.log" Jan 22 16:25:53 crc kubenswrapper[4825]: I0122 16:25:53.241813 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6b65dd5d-6fe0-4cec-a8d4-d05b099607af/mysql-bootstrap/0.log" Jan 22 16:25:53 crc kubenswrapper[4825]: I0122 16:25:53.481030 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_4709fedd-37c2-4afa-b34d-347f46586c55/openstackclient/0.log" Jan 22 16:25:53 crc kubenswrapper[4825]: I0122 16:25:53.559200 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mcnvt_b68d7d4f-a421-461d-8d1c-c2d9d9a8aac7/openstack-network-exporter/0.log" Jan 22 16:25:53 crc kubenswrapper[4825]: I0122 16:25:53.810485 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bkgcs_b08ffe2b-2e43-437b-beb1-2eb436baa4ec/ovsdb-server-init/0.log" Jan 22 16:25:54 crc kubenswrapper[4825]: I0122 16:25:54.315769 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bkgcs_b08ffe2b-2e43-437b-beb1-2eb436baa4ec/ovs-vswitchd/0.log" Jan 22 16:25:54 crc kubenswrapper[4825]: I0122 16:25:54.336013 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-bkgcs_b08ffe2b-2e43-437b-beb1-2eb436baa4ec/ovsdb-server-init/0.log" Jan 22 16:25:54 crc kubenswrapper[4825]: I0122 16:25:54.365095 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bkgcs_b08ffe2b-2e43-437b-beb1-2eb436baa4ec/ovsdb-server/0.log" Jan 22 16:25:54 crc kubenswrapper[4825]: I0122 16:25:54.589421 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-snszk_306a03b3-2cdb-494a-ab5b-51d80fe3586c/ovn-controller/0.log" Jan 22 16:25:55 crc kubenswrapper[4825]: I0122 16:25:55.306825 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-bb5mv_3b4f4d3b-5e0b-43dd-9b2f-d706bd69c8c8/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 16:25:55 crc kubenswrapper[4825]: I0122 16:25:55.337451 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9/openstack-network-exporter/0.log" Jan 22 16:25:55 crc kubenswrapper[4825]: I0122 16:25:55.637307 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6c5cc7d3-edf6-4f93-ba54-6dfbc2acefe9/ovn-northd/0.log" Jan 22 16:25:55 crc kubenswrapper[4825]: I0122 16:25:55.649224 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4d975cad-bc38-442f-acdd-0c8fa4f3b429/openstack-network-exporter/0.log" Jan 22 16:25:55 crc kubenswrapper[4825]: I0122 16:25:55.896645 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4d975cad-bc38-442f-acdd-0c8fa4f3b429/ovsdbserver-nb/0.log" Jan 22 16:25:55 crc kubenswrapper[4825]: I0122 16:25:55.900168 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_892f29f1-29c4-4f1d-83af-660bf2983766/openstack-network-exporter/0.log" Jan 22 16:25:56 crc kubenswrapper[4825]: I0122 16:25:56.401887 4825 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_892f29f1-29c4-4f1d-83af-660bf2983766/ovsdbserver-sb/0.log" Jan 22 16:25:56 crc kubenswrapper[4825]: I0122 16:25:56.618151 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-679b6799cd-9xsrq_9634c15f-16c7-43e2-877b-934fa9467de7/placement-api/0.log" Jan 22 16:25:56 crc kubenswrapper[4825]: I0122 16:25:56.851840 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-679b6799cd-9xsrq_9634c15f-16c7-43e2-877b-934fa9467de7/placement-log/0.log" Jan 22 16:25:56 crc kubenswrapper[4825]: I0122 16:25:56.910482 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_667e755d-b6f5-4280-9640-a7a893684b7f/init-config-reloader/0.log" Jan 22 16:25:57 crc kubenswrapper[4825]: I0122 16:25:57.165173 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_667e755d-b6f5-4280-9640-a7a893684b7f/init-config-reloader/0.log" Jan 22 16:25:57 crc kubenswrapper[4825]: I0122 16:25:57.208592 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_667e755d-b6f5-4280-9640-a7a893684b7f/prometheus/0.log" Jan 22 16:25:57 crc kubenswrapper[4825]: I0122 16:25:57.253526 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_667e755d-b6f5-4280-9640-a7a893684b7f/config-reloader/0.log" Jan 22 16:25:57 crc kubenswrapper[4825]: I0122 16:25:57.423597 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_667e755d-b6f5-4280-9640-a7a893684b7f/thanos-sidecar/0.log" Jan 22 16:25:57 crc kubenswrapper[4825]: I0122 16:25:57.622787 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_efaf42df-9ed2-41b2-b660-bacb51298b2c/setup-container/0.log" Jan 22 16:25:57 crc kubenswrapper[4825]: I0122 16:25:57.967498 4825 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_efaf42df-9ed2-41b2-b660-bacb51298b2c/setup-container/0.log" Jan 22 16:25:58 crc kubenswrapper[4825]: I0122 16:25:58.023584 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_efaf42df-9ed2-41b2-b660-bacb51298b2c/rabbitmq/0.log" Jan 22 16:25:58 crc kubenswrapper[4825]: I0122 16:25:58.171724 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_80f35c3b-7247-4a39-8562-d68602381fa1/setup-container/0.log" Jan 22 16:25:58 crc kubenswrapper[4825]: I0122 16:25:58.465311 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_80f35c3b-7247-4a39-8562-d68602381fa1/setup-container/0.log" Jan 22 16:25:59 crc kubenswrapper[4825]: I0122 16:25:59.068111 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_80f35c3b-7247-4a39-8562-d68602381fa1/rabbitmq/0.log" Jan 22 16:25:59 crc kubenswrapper[4825]: I0122 16:25:59.191924 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-fctp2_deb0bc07-ef7e-49c2-8df1-4c7d0e2e28d1/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 16:25:59 crc kubenswrapper[4825]: I0122 16:25:59.371513 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-n6vtm_c754afdb-51d7-442c-a0eb-baf47399beb7/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 16:25:59 crc kubenswrapper[4825]: I0122 16:25:59.654631 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-6stp6_07e8bf5e-6706-4987-8447-e918785d8f38/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 16:25:59 crc kubenswrapper[4825]: I0122 16:25:59.706004 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-v49kl_36cb581a-e6c1-479e-ad47-efcba7182aef/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 16:26:00 crc kubenswrapper[4825]: I0122 16:26:00.570256 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-phjk9_7aa5c4d3-6bba-4019-a805-182fc8fa4efa/ssh-known-hosts-edpm-deployment/0.log" Jan 22 16:26:00 crc kubenswrapper[4825]: I0122 16:26:00.951812 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-789468d499-5b789_562fb5cd-164c-4308-851d-88b6afd1e3c2/proxy-httpd/0.log" Jan 22 16:26:01 crc kubenswrapper[4825]: I0122 16:26:01.077780 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-789468d499-5b789_562fb5cd-164c-4308-851d-88b6afd1e3c2/proxy-server/0.log" Jan 22 16:26:01 crc kubenswrapper[4825]: I0122 16:26:01.342202 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-z6dh6_4cc9ba42-f6cd-48ac-b240-d2d764abe4a2/swift-ring-rebalance/0.log" Jan 22 16:26:01 crc kubenswrapper[4825]: I0122 16:26:01.521506 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62f00afd-c39a-409f-ba5e-b5474959717b/account-auditor/0.log" Jan 22 16:26:01 crc kubenswrapper[4825]: I0122 16:26:01.600530 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62f00afd-c39a-409f-ba5e-b5474959717b/account-reaper/0.log" Jan 22 16:26:01 crc kubenswrapper[4825]: I0122 16:26:01.742575 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62f00afd-c39a-409f-ba5e-b5474959717b/account-server/0.log" Jan 22 16:26:01 crc kubenswrapper[4825]: I0122 16:26:01.775737 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62f00afd-c39a-409f-ba5e-b5474959717b/account-replicator/0.log" Jan 22 16:26:01 crc kubenswrapper[4825]: I0122 
16:26:01.822343 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62f00afd-c39a-409f-ba5e-b5474959717b/container-auditor/0.log" Jan 22 16:26:02 crc kubenswrapper[4825]: I0122 16:26:02.119919 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62f00afd-c39a-409f-ba5e-b5474959717b/container-replicator/0.log" Jan 22 16:26:02 crc kubenswrapper[4825]: I0122 16:26:02.239330 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62f00afd-c39a-409f-ba5e-b5474959717b/container-server/0.log" Jan 22 16:26:02 crc kubenswrapper[4825]: I0122 16:26:02.240073 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62f00afd-c39a-409f-ba5e-b5474959717b/container-updater/0.log" Jan 22 16:26:02 crc kubenswrapper[4825]: I0122 16:26:02.323049 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62f00afd-c39a-409f-ba5e-b5474959717b/object-auditor/0.log" Jan 22 16:26:02 crc kubenswrapper[4825]: I0122 16:26:02.440653 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62f00afd-c39a-409f-ba5e-b5474959717b/object-expirer/0.log" Jan 22 16:26:02 crc kubenswrapper[4825]: I0122 16:26:02.503436 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62f00afd-c39a-409f-ba5e-b5474959717b/object-replicator/0.log" Jan 22 16:26:02 crc kubenswrapper[4825]: I0122 16:26:02.522502 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62f00afd-c39a-409f-ba5e-b5474959717b/object-server/0.log" Jan 22 16:26:03 crc kubenswrapper[4825]: I0122 16:26:03.148782 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62f00afd-c39a-409f-ba5e-b5474959717b/swift-recon-cron/0.log" Jan 22 16:26:03 crc kubenswrapper[4825]: I0122 16:26:03.295358 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_62f00afd-c39a-409f-ba5e-b5474959717b/rsync/0.log" Jan 22 16:26:03 crc kubenswrapper[4825]: I0122 16:26:03.343627 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_62f00afd-c39a-409f-ba5e-b5474959717b/object-updater/0.log" Jan 22 16:26:03 crc kubenswrapper[4825]: I0122 16:26:03.670782 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-xh7fh_803d8c56-ded5-4e3c-bf48-d5eb0b623dfe/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 16:26:04 crc kubenswrapper[4825]: I0122 16:26:04.021612 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_d022aa13-5f44-4fc9-8796-f86c575836ce/tempest-tests-tempest-tests-runner/0.log" Jan 22 16:26:04 crc kubenswrapper[4825]: I0122 16:26:04.071851 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_cdf78686-853e-48db-b28b-19a3494c7296/test-operator-logs-container/0.log" Jan 22 16:26:04 crc kubenswrapper[4825]: I0122 16:26:04.311999 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-lfqgk_0b5da8d0-15a3-4f88-8aa3-57f5fa886633/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 16:26:05 crc kubenswrapper[4825]: I0122 16:26:05.541866 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:26:05 crc kubenswrapper[4825]: I0122 16:26:05.541919 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:26:08 crc kubenswrapper[4825]: I0122 16:26:08.186184 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_d1a2b167-da42-48fa-9e6b-0038aa5a36ce/cloudkitty-proc/0.log" Jan 22 16:26:10 crc kubenswrapper[4825]: I0122 16:26:10.432710 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b85d0578-2876-4355-b5f7-7412f59eb278/memcached/0.log" Jan 22 16:26:35 crc kubenswrapper[4825]: I0122 16:26:35.541656 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:26:35 crc kubenswrapper[4825]: I0122 16:26:35.542290 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:26:39 crc kubenswrapper[4825]: I0122 16:26:39.787070 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46_e45ace87-f1a4-47c4-9582-c56942dee924/util/0.log" Jan 22 16:26:39 crc kubenswrapper[4825]: I0122 16:26:39.987268 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46_e45ace87-f1a4-47c4-9582-c56942dee924/util/0.log" Jan 22 16:26:40 crc kubenswrapper[4825]: I0122 16:26:40.021135 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46_e45ace87-f1a4-47c4-9582-c56942dee924/pull/0.log" Jan 22 16:26:40 crc kubenswrapper[4825]: I0122 16:26:40.134684 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46_e45ace87-f1a4-47c4-9582-c56942dee924/pull/0.log" Jan 22 16:26:40 crc kubenswrapper[4825]: I0122 16:26:40.517740 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46_e45ace87-f1a4-47c4-9582-c56942dee924/pull/0.log" Jan 22 16:26:40 crc kubenswrapper[4825]: I0122 16:26:40.524775 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46_e45ace87-f1a4-47c4-9582-c56942dee924/util/0.log" Jan 22 16:26:40 crc kubenswrapper[4825]: I0122 16:26:40.623573 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9a7c3899d0cd9a3fbfc49380996a4daecb22a8a63d65783d558499e7cftcz46_e45ace87-f1a4-47c4-9582-c56942dee924/extract/0.log" Jan 22 16:26:40 crc kubenswrapper[4825]: I0122 16:26:40.903008 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-wtvfh_a2489602-cadb-4351-96b5-5727dbeb521d/manager/0.log" Jan 22 16:26:40 crc kubenswrapper[4825]: I0122 16:26:40.906271 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-bj5dt_d5ca055b-760a-4356-a32e-4b2358edbe73/manager/0.log" Jan 22 16:26:41 crc kubenswrapper[4825]: I0122 16:26:41.043017 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-p8542_fb806cbf-2796-48f0-980a-5ab87a967cc7/manager/0.log" Jan 22 16:26:41 crc kubenswrapper[4825]: I0122 
16:26:41.170113 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-swmt9_fc586b00-c7d2-47f7-bb91-8d9740048538/manager/0.log" Jan 22 16:26:41 crc kubenswrapper[4825]: I0122 16:26:41.243569 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-fntv2_15cb9f6b-3766-4ac5-8272-cec4434eebcd/manager/0.log" Jan 22 16:26:41 crc kubenswrapper[4825]: I0122 16:26:41.340224 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-j9zxq_e3094c32-0018-4519-8606-e0e3a3420425/manager/0.log" Jan 22 16:26:41 crc kubenswrapper[4825]: I0122 16:26:41.549489 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-w2xrz_8739e7a5-b09f-4908-b1e8-893e06e8c0d5/manager/0.log" Jan 22 16:26:41 crc kubenswrapper[4825]: I0122 16:26:41.733839 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-c6r4m_d369dfbc-d830-4221-aca1-386666bca9a7/manager/0.log" Jan 22 16:26:41 crc kubenswrapper[4825]: I0122 16:26:41.973581 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-4j7zw_aa7fc7cf-0d1f-4caa-905b-add971620c70/manager/0.log" Jan 22 16:26:42 crc kubenswrapper[4825]: I0122 16:26:42.018577 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-8jv66_02608e48-40b1-4179-ac70-99aad7341dbf/manager/0.log" Jan 22 16:26:42 crc kubenswrapper[4825]: I0122 16:26:42.208742 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-hxvkd_db3cdd8c-dec4-42cc-bb80-f29321423ab7/manager/0.log" Jan 22 16:26:42 crc 
kubenswrapper[4825]: I0122 16:26:42.304735 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-tfd8w_dd5e1412-572a-4014-ae4b-69415ab62800/manager/0.log" Jan 22 16:26:42 crc kubenswrapper[4825]: I0122 16:26:42.532006 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-5js5c_f1b06b68-e7ff-45d0-aaf2-7ee63d7d4ec5/manager/0.log" Jan 22 16:26:42 crc kubenswrapper[4825]: I0122 16:26:42.541658 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-j6kq4_886df838-09b9-423d-a8b6-3a5d428a0d30/manager/0.log" Jan 22 16:26:42 crc kubenswrapper[4825]: I0122 16:26:42.802278 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854j6c6w_10a92b09-1701-49e1-bb4c-e715ddf9ff4f/manager/0.log" Jan 22 16:26:42 crc kubenswrapper[4825]: I0122 16:26:42.851850 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7bf86bd88b-knxbj_92157fff-bfe1-4bcb-ba7d-617b72c1781c/operator/0.log" Jan 22 16:26:43 crc kubenswrapper[4825]: I0122 16:26:43.253858 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xcglg_28529c7d-badd-40d9-a46d-2b2632765ce6/registry-server/0.log" Jan 22 16:26:43 crc kubenswrapper[4825]: I0122 16:26:43.512774 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-9c9tx_c07e4778-10c8-4074-ac87-f6088891be7c/manager/0.log" Jan 22 16:26:44 crc kubenswrapper[4825]: I0122 16:26:44.089108 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-5wxzj_24600886-dd49-445f-bdcd-ed919ec8fd02/manager/0.log" Jan 22 
16:26:44 crc kubenswrapper[4825]: I0122 16:26:44.161105 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-848467994c-vvklq_42cd1b7d-1439-4cf8-aaf4-5e665128a25e/manager/0.log" Jan 22 16:26:44 crc kubenswrapper[4825]: I0122 16:26:44.184390 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-r7vn5_83e9f24f-c02c-4bbc-92ac-7f1e5d42f00c/operator/0.log" Jan 22 16:26:44 crc kubenswrapper[4825]: I0122 16:26:44.425097 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-bmm4t_7042acfa-6435-4bfc-812b-45bbb2523cf9/manager/0.log" Jan 22 16:26:44 crc kubenswrapper[4825]: I0122 16:26:44.655126 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-86476_068b0c83-8bef-4835-871c-317c62e88f50/manager/0.log" Jan 22 16:26:44 crc kubenswrapper[4825]: I0122 16:26:44.765063 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-4ghsm_022dd12c-03b8-43c4-92db-1a7654fcffeb/manager/0.log" Jan 22 16:26:44 crc kubenswrapper[4825]: I0122 16:26:44.803089 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5fbc679d4d-92fqf_059620d7-dcad-4c62-804b-f92566f0fd85/manager/0.log" Jan 22 16:27:05 crc kubenswrapper[4825]: I0122 16:27:05.544232 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:27:05 crc kubenswrapper[4825]: I0122 16:27:05.544762 4825 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:27:05 crc kubenswrapper[4825]: I0122 16:27:05.545599 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 16:27:05 crc kubenswrapper[4825]: I0122 16:27:05.546925 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea"} pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:27:05 crc kubenswrapper[4825]: I0122 16:27:05.547061 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" containerID="cri-o://badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea" gracePeriod=600 Jan 22 16:27:06 crc kubenswrapper[4825]: I0122 16:27:06.145184 4825 generic.go:334] "Generic (PLEG): container finished" podID="1d6015ae-d193-4854-9861-dc4384510fdb" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea" exitCode=0 Jan 22 16:27:06 crc kubenswrapper[4825]: I0122 16:27:06.145271 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerDied","Data":"badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea"} Jan 22 16:27:06 crc kubenswrapper[4825]: I0122 16:27:06.145529 4825 scope.go:117] "RemoveContainer" 
containerID="3e05690f72d9972e52963cd90f0219528b87bcd469134ce86f8ebbe78d329a4d" Jan 22 16:27:06 crc kubenswrapper[4825]: E0122 16:27:06.174597 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:27:07 crc kubenswrapper[4825]: I0122 16:27:07.158602 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea" Jan 22 16:27:07 crc kubenswrapper[4825]: E0122 16:27:07.159034 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:27:09 crc kubenswrapper[4825]: I0122 16:27:09.995647 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xhfx7_095c6359-a33b-4176-becb-f60758bb28b4/control-plane-machine-set-operator/0.log" Jan 22 16:27:10 crc kubenswrapper[4825]: I0122 16:27:10.165163 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-82rs5_6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73/kube-rbac-proxy/0.log" Jan 22 16:27:10 crc kubenswrapper[4825]: I0122 16:27:10.217367 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-82rs5_6f00a8ac-2c26-4fd9-9ca0-e4e9e6423b73/machine-api-operator/0.log" Jan 22 16:27:21 crc kubenswrapper[4825]: I0122 16:27:21.517090 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea" Jan 22 16:27:21 crc kubenswrapper[4825]: E0122 16:27:21.517742 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:27:26 crc kubenswrapper[4825]: I0122 16:27:26.329515 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-8rcmg_25d0f3c8-a90d-468d-97bf-61ce52c80b40/cert-manager-controller/0.log" Jan 22 16:27:26 crc kubenswrapper[4825]: I0122 16:27:26.531175 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-mvpnf_a55ed53b-731c-41a4-8f41-8baa28baf731/cert-manager-cainjector/0.log" Jan 22 16:27:26 crc kubenswrapper[4825]: I0122 16:27:26.586524 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-hf8bj_f957c7cc-d8d5-435b-976d-3fe554887cc0/cert-manager-webhook/0.log" Jan 22 16:27:33 crc kubenswrapper[4825]: I0122 16:27:33.524406 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea" Jan 22 16:27:33 crc kubenswrapper[4825]: E0122 16:27:33.525050 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:27:42 crc kubenswrapper[4825]: I0122 16:27:42.335201 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-fxcqq_daec0120-0078-4bfc-a484-c8e25bce75cc/nmstate-console-plugin/0.log" Jan 22 16:27:42 crc kubenswrapper[4825]: I0122 16:27:42.584183 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wbmlw_cb155748-60ea-4ba4-8add-144027528478/nmstate-handler/0.log" Jan 22 16:27:42 crc kubenswrapper[4825]: I0122 16:27:42.756878 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-2vr9x_61e185b2-1b85-42f6-be2f-7e2d9d698453/kube-rbac-proxy/0.log" Jan 22 16:27:42 crc kubenswrapper[4825]: I0122 16:27:42.773842 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-2vr9x_61e185b2-1b85-42f6-be2f-7e2d9d698453/nmstate-metrics/0.log" Jan 22 16:27:42 crc kubenswrapper[4825]: I0122 16:27:42.816816 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-bjdhf_072bae22-8fb1-4abb-ab89-da32c2282f11/nmstate-operator/0.log" Jan 22 16:27:43 crc kubenswrapper[4825]: I0122 16:27:43.002218 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-nncrc_cb73c1aa-4e2b-40fd-aebe-21d16e031e60/nmstate-webhook/0.log" Jan 22 16:27:44 crc kubenswrapper[4825]: I0122 16:27:44.517945 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea" Jan 22 16:27:44 crc kubenswrapper[4825]: E0122 16:27:44.518509 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:27:57 crc kubenswrapper[4825]: I0122 16:27:57.518817 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea" Jan 22 16:27:57 crc kubenswrapper[4825]: E0122 16:27:57.519667 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:28:01 crc kubenswrapper[4825]: I0122 16:28:01.076760 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5867677bb9-kgwmt_52f9b085-39e8-4a44-93c0-be3d751cb667/kube-rbac-proxy/0.log" Jan 22 16:28:01 crc kubenswrapper[4825]: I0122 16:28:01.152282 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5867677bb9-kgwmt_52f9b085-39e8-4a44-93c0-be3d751cb667/manager/0.log" Jan 22 16:28:11 crc kubenswrapper[4825]: I0122 16:28:11.517872 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea" Jan 22 16:28:11 crc kubenswrapper[4825]: E0122 16:28:11.518537 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:28:18 crc kubenswrapper[4825]: I0122 16:28:18.185749 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-x6k4r_1d4ea96c-02ed-4924-bdc0-0fa0a9932467/prometheus-operator/0.log" Jan 22 16:28:18 crc kubenswrapper[4825]: I0122 16:28:18.259155 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7ff58c757-6xffs_0eec7a70-3ecb-430c-b94d-94ad04cf5ee1/prometheus-operator-admission-webhook/0.log" Jan 22 16:28:18 crc kubenswrapper[4825]: I0122 16:28:18.495563 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4_290bb62e-9a41-4a2a-886c-803ffa414dce/prometheus-operator-admission-webhook/0.log" Jan 22 16:28:18 crc kubenswrapper[4825]: I0122 16:28:18.589339 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-k58sz_cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7/operator/0.log" Jan 22 16:28:18 crc kubenswrapper[4825]: I0122 16:28:18.704542 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-8wflx_2df2d5aa-6948-42a3-8ba0-7eedffb87020/perses-operator/0.log" Jan 22 16:28:25 crc kubenswrapper[4825]: I0122 16:28:25.517133 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea" Jan 22 16:28:25 crc kubenswrapper[4825]: E0122 16:28:25.517947 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:28:38 crc kubenswrapper[4825]: I0122 16:28:38.431794 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-t622w_7bed4152-970e-428e-a8bd-21fe165bde92/kube-rbac-proxy/0.log" Jan 22 16:28:38 crc kubenswrapper[4825]: I0122 16:28:38.472645 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-t622w_7bed4152-970e-428e-a8bd-21fe165bde92/controller/0.log" Jan 22 16:28:38 crc kubenswrapper[4825]: I0122 16:28:38.665152 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kt4gs_0773281d-f402-46e0-ae19-32a82824046b/cp-frr-files/0.log" Jan 22 16:28:38 crc kubenswrapper[4825]: I0122 16:28:38.817866 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kt4gs_0773281d-f402-46e0-ae19-32a82824046b/cp-frr-files/0.log" Jan 22 16:28:38 crc kubenswrapper[4825]: I0122 16:28:38.834770 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kt4gs_0773281d-f402-46e0-ae19-32a82824046b/cp-reloader/0.log" Jan 22 16:28:38 crc kubenswrapper[4825]: I0122 16:28:38.845589 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kt4gs_0773281d-f402-46e0-ae19-32a82824046b/cp-metrics/0.log" Jan 22 16:28:38 crc kubenswrapper[4825]: I0122 16:28:38.862449 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kt4gs_0773281d-f402-46e0-ae19-32a82824046b/cp-reloader/0.log" Jan 22 16:28:39 crc kubenswrapper[4825]: I0122 16:28:39.274722 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kt4gs_0773281d-f402-46e0-ae19-32a82824046b/cp-reloader/0.log" Jan 22 16:28:39 crc kubenswrapper[4825]: I0122 
16:28:39.378007 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kt4gs_0773281d-f402-46e0-ae19-32a82824046b/cp-frr-files/0.log" Jan 22 16:28:39 crc kubenswrapper[4825]: I0122 16:28:39.384433 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kt4gs_0773281d-f402-46e0-ae19-32a82824046b/cp-metrics/0.log" Jan 22 16:28:39 crc kubenswrapper[4825]: I0122 16:28:39.508697 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kt4gs_0773281d-f402-46e0-ae19-32a82824046b/cp-metrics/0.log" Jan 22 16:28:39 crc kubenswrapper[4825]: I0122 16:28:39.517763 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea" Jan 22 16:28:39 crc kubenswrapper[4825]: E0122 16:28:39.518167 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:28:39 crc kubenswrapper[4825]: I0122 16:28:39.583650 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kt4gs_0773281d-f402-46e0-ae19-32a82824046b/cp-metrics/0.log" Jan 22 16:28:39 crc kubenswrapper[4825]: I0122 16:28:39.614592 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kt4gs_0773281d-f402-46e0-ae19-32a82824046b/cp-frr-files/0.log" Jan 22 16:28:39 crc kubenswrapper[4825]: I0122 16:28:39.635755 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kt4gs_0773281d-f402-46e0-ae19-32a82824046b/cp-reloader/0.log" Jan 22 16:28:39 crc kubenswrapper[4825]: I0122 16:28:39.711379 4825 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_frr-k8s-kt4gs_0773281d-f402-46e0-ae19-32a82824046b/controller/0.log" Jan 22 16:28:39 crc kubenswrapper[4825]: I0122 16:28:39.832280 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kt4gs_0773281d-f402-46e0-ae19-32a82824046b/kube-rbac-proxy/0.log" Jan 22 16:28:39 crc kubenswrapper[4825]: I0122 16:28:39.836016 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kt4gs_0773281d-f402-46e0-ae19-32a82824046b/frr-metrics/0.log" Jan 22 16:28:39 crc kubenswrapper[4825]: I0122 16:28:39.875397 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kt4gs_0773281d-f402-46e0-ae19-32a82824046b/kube-rbac-proxy-frr/0.log" Jan 22 16:28:40 crc kubenswrapper[4825]: I0122 16:28:40.018649 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kt4gs_0773281d-f402-46e0-ae19-32a82824046b/reloader/0.log" Jan 22 16:28:40 crc kubenswrapper[4825]: I0122 16:28:40.154811 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-zk72c_148811c4-5c9f-4c58-86f6-df32772b3fb9/frr-k8s-webhook-server/0.log" Jan 22 16:28:40 crc kubenswrapper[4825]: I0122 16:28:40.459634 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-59f887c8c5-jc6l7_6f10e107-2124-422f-9201-d516620b0919/manager/0.log" Jan 22 16:28:40 crc kubenswrapper[4825]: I0122 16:28:40.834529 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-f8d94c798-ms78l_17f8943a-3372-4216-aa96-9e61c5e8110c/webhook-server/0.log" Jan 22 16:28:40 crc kubenswrapper[4825]: I0122 16:28:40.880306 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lk5df_6b567ea4-6df9-4c62-8caf-c8bb77aae0b7/kube-rbac-proxy/0.log" Jan 22 16:28:41 crc kubenswrapper[4825]: I0122 16:28:41.575924 4825 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kt4gs_0773281d-f402-46e0-ae19-32a82824046b/frr/0.log" Jan 22 16:28:41 crc kubenswrapper[4825]: I0122 16:28:41.656856 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lk5df_6b567ea4-6df9-4c62-8caf-c8bb77aae0b7/speaker/0.log" Jan 22 16:28:54 crc kubenswrapper[4825]: I0122 16:28:54.517371 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea" Jan 22 16:28:54 crc kubenswrapper[4825]: E0122 16:28:54.518808 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:28:57 crc kubenswrapper[4825]: I0122 16:28:57.072480 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt_20a233ad-0f91-4e20-806f-84cdef936bc8/util/0.log" Jan 22 16:28:57 crc kubenswrapper[4825]: I0122 16:28:57.252708 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt_20a233ad-0f91-4e20-806f-84cdef936bc8/pull/0.log" Jan 22 16:28:57 crc kubenswrapper[4825]: I0122 16:28:57.265573 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt_20a233ad-0f91-4e20-806f-84cdef936bc8/util/0.log" Jan 22 16:28:57 crc kubenswrapper[4825]: I0122 16:28:57.280837 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt_20a233ad-0f91-4e20-806f-84cdef936bc8/pull/0.log" Jan 22 16:28:57 crc kubenswrapper[4825]: I0122 16:28:57.446898 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt_20a233ad-0f91-4e20-806f-84cdef936bc8/pull/0.log" Jan 22 16:28:57 crc kubenswrapper[4825]: I0122 16:28:57.454838 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt_20a233ad-0f91-4e20-806f-84cdef936bc8/extract/0.log" Jan 22 16:28:57 crc kubenswrapper[4825]: I0122 16:28:57.517298 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcq5plt_20a233ad-0f91-4e20-806f-84cdef936bc8/util/0.log" Jan 22 16:28:57 crc kubenswrapper[4825]: I0122 16:28:57.677061 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh_6070a053-75b3-46a8-9e38-b6a1ad5324a8/util/0.log" Jan 22 16:28:57 crc kubenswrapper[4825]: I0122 16:28:57.949838 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh_6070a053-75b3-46a8-9e38-b6a1ad5324a8/util/0.log" Jan 22 16:28:57 crc kubenswrapper[4825]: I0122 16:28:57.969881 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh_6070a053-75b3-46a8-9e38-b6a1ad5324a8/pull/0.log" Jan 22 16:28:57 crc kubenswrapper[4825]: I0122 16:28:57.989844 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh_6070a053-75b3-46a8-9e38-b6a1ad5324a8/pull/0.log" Jan 22 
16:28:58 crc kubenswrapper[4825]: I0122 16:28:58.167770 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh_6070a053-75b3-46a8-9e38-b6a1ad5324a8/pull/0.log" Jan 22 16:28:58 crc kubenswrapper[4825]: I0122 16:28:58.200869 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh_6070a053-75b3-46a8-9e38-b6a1ad5324a8/extract/0.log" Jan 22 16:28:58 crc kubenswrapper[4825]: I0122 16:28:58.248335 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773kk7kh_6070a053-75b3-46a8-9e38-b6a1ad5324a8/util/0.log" Jan 22 16:28:58 crc kubenswrapper[4825]: I0122 16:28:58.465526 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc_ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6/util/0.log" Jan 22 16:28:58 crc kubenswrapper[4825]: I0122 16:28:58.619140 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc_ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6/pull/0.log" Jan 22 16:28:58 crc kubenswrapper[4825]: I0122 16:28:58.631860 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc_ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6/util/0.log" Jan 22 16:28:58 crc kubenswrapper[4825]: I0122 16:28:58.634248 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc_ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6/pull/0.log" Jan 22 16:28:58 crc kubenswrapper[4825]: I0122 16:28:58.987846 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc_ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6/pull/0.log" Jan 22 16:28:59 crc kubenswrapper[4825]: I0122 16:28:59.060971 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc_ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6/extract/0.log" Jan 22 16:28:59 crc kubenswrapper[4825]: I0122 16:28:59.067380 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rrhnc_ad0c11c4-596d-4cf6-bd06-b2a3d065f0d6/util/0.log" Jan 22 16:28:59 crc kubenswrapper[4825]: I0122 16:28:59.211702 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn_ddaef815-cdc9-496c-84b6-854d4d626f48/util/0.log" Jan 22 16:28:59 crc kubenswrapper[4825]: I0122 16:28:59.436878 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn_ddaef815-cdc9-496c-84b6-854d4d626f48/util/0.log" Jan 22 16:28:59 crc kubenswrapper[4825]: I0122 16:28:59.592062 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn_ddaef815-cdc9-496c-84b6-854d4d626f48/pull/0.log" Jan 22 16:28:59 crc kubenswrapper[4825]: I0122 16:28:59.640477 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn_ddaef815-cdc9-496c-84b6-854d4d626f48/pull/0.log" Jan 22 16:28:59 crc kubenswrapper[4825]: I0122 16:28:59.881592 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn_ddaef815-cdc9-496c-84b6-854d4d626f48/pull/0.log" Jan 22 
16:28:59 crc kubenswrapper[4825]: I0122 16:28:59.884548 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn_ddaef815-cdc9-496c-84b6-854d4d626f48/extract/0.log" Jan 22 16:28:59 crc kubenswrapper[4825]: I0122 16:28:59.910819 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tqjqn_ddaef815-cdc9-496c-84b6-854d4d626f48/util/0.log" Jan 22 16:29:00 crc kubenswrapper[4825]: I0122 16:29:00.115136 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lr5ws_2a5fee3b-6b13-47f7-aa99-8ac3068afb93/extract-utilities/0.log" Jan 22 16:29:00 crc kubenswrapper[4825]: I0122 16:29:00.992487 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lr5ws_2a5fee3b-6b13-47f7-aa99-8ac3068afb93/extract-utilities/0.log" Jan 22 16:29:00 crc kubenswrapper[4825]: I0122 16:29:00.997160 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lr5ws_2a5fee3b-6b13-47f7-aa99-8ac3068afb93/extract-content/0.log" Jan 22 16:29:00 crc kubenswrapper[4825]: I0122 16:29:00.997160 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lr5ws_2a5fee3b-6b13-47f7-aa99-8ac3068afb93/extract-content/0.log" Jan 22 16:29:01 crc kubenswrapper[4825]: I0122 16:29:01.180171 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lr5ws_2a5fee3b-6b13-47f7-aa99-8ac3068afb93/extract-utilities/0.log" Jan 22 16:29:01 crc kubenswrapper[4825]: I0122 16:29:01.209030 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lr5ws_2a5fee3b-6b13-47f7-aa99-8ac3068afb93/extract-content/0.log" Jan 22 16:29:01 crc kubenswrapper[4825]: I0122 16:29:01.273250 
4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cfn59_c6f1ab07-8476-48f7-8969-fd7bdba2fa71/extract-utilities/0.log" Jan 22 16:29:01 crc kubenswrapper[4825]: I0122 16:29:01.476391 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lr5ws_2a5fee3b-6b13-47f7-aa99-8ac3068afb93/registry-server/0.log" Jan 22 16:29:01 crc kubenswrapper[4825]: I0122 16:29:01.536358 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cfn59_c6f1ab07-8476-48f7-8969-fd7bdba2fa71/extract-content/0.log" Jan 22 16:29:01 crc kubenswrapper[4825]: I0122 16:29:01.551289 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cfn59_c6f1ab07-8476-48f7-8969-fd7bdba2fa71/extract-content/0.log" Jan 22 16:29:01 crc kubenswrapper[4825]: I0122 16:29:01.554831 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cfn59_c6f1ab07-8476-48f7-8969-fd7bdba2fa71/extract-utilities/0.log" Jan 22 16:29:01 crc kubenswrapper[4825]: I0122 16:29:01.727724 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cfn59_c6f1ab07-8476-48f7-8969-fd7bdba2fa71/extract-content/0.log" Jan 22 16:29:01 crc kubenswrapper[4825]: I0122 16:29:01.729245 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cfn59_c6f1ab07-8476-48f7-8969-fd7bdba2fa71/extract-utilities/0.log" Jan 22 16:29:01 crc kubenswrapper[4825]: I0122 16:29:01.785220 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkt56_1ff904fd-281e-4583-9b04-bd906890ec8d/extract-utilities/0.log" Jan 22 16:29:01 crc kubenswrapper[4825]: I0122 16:29:01.810677 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-cfn59_c6f1ab07-8476-48f7-8969-fd7bdba2fa71/registry-server/0.log" Jan 22 16:29:01 crc kubenswrapper[4825]: I0122 16:29:01.969901 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkt56_1ff904fd-281e-4583-9b04-bd906890ec8d/extract-content/0.log" Jan 22 16:29:01 crc kubenswrapper[4825]: I0122 16:29:01.994556 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkt56_1ff904fd-281e-4583-9b04-bd906890ec8d/extract-utilities/0.log" Jan 22 16:29:02 crc kubenswrapper[4825]: I0122 16:29:02.025866 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkt56_1ff904fd-281e-4583-9b04-bd906890ec8d/extract-content/0.log" Jan 22 16:29:02 crc kubenswrapper[4825]: I0122 16:29:02.159997 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkt56_1ff904fd-281e-4583-9b04-bd906890ec8d/extract-utilities/0.log" Jan 22 16:29:02 crc kubenswrapper[4825]: I0122 16:29:02.199753 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkt56_1ff904fd-281e-4583-9b04-bd906890ec8d/extract-content/0.log" Jan 22 16:29:02 crc kubenswrapper[4825]: I0122 16:29:02.228739 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkt56_1ff904fd-281e-4583-9b04-bd906890ec8d/registry-server/0.log" Jan 22 16:29:02 crc kubenswrapper[4825]: I0122 16:29:02.804235 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kf2vh_57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3/extract-utilities/0.log" Jan 22 16:29:03 crc kubenswrapper[4825]: I0122 16:29:03.126921 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-kf2vh_57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3/extract-content/0.log" Jan 22 16:29:03 crc kubenswrapper[4825]: I0122 16:29:03.137535 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kf2vh_57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3/extract-utilities/0.log" Jan 22 16:29:03 crc kubenswrapper[4825]: I0122 16:29:03.198158 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kf2vh_57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3/extract-content/0.log" Jan 22 16:29:03 crc kubenswrapper[4825]: I0122 16:29:03.462886 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kf2vh_57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3/extract-utilities/0.log" Jan 22 16:29:03 crc kubenswrapper[4825]: I0122 16:29:03.475897 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kf2vh_57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3/extract-content/0.log" Jan 22 16:29:03 crc kubenswrapper[4825]: I0122 16:29:03.508950 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-k98tl_3c53d3cf-ed7c-4579-a577-9e19ffb5d58e/marketplace-operator/0.log" Jan 22 16:29:03 crc kubenswrapper[4825]: I0122 16:29:03.763161 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-l7f7x_30bcd0c8-9381-4b99-a083-b014af82df43/extract-utilities/0.log" Jan 22 16:29:04 crc kubenswrapper[4825]: I0122 16:29:04.021568 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-l7f7x_30bcd0c8-9381-4b99-a083-b014af82df43/extract-utilities/0.log" Jan 22 16:29:04 crc kubenswrapper[4825]: I0122 16:29:04.023434 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-kf2vh_57e9db63-ab8b-4fa5-98c3-46cfd5e87fc3/registry-server/0.log" Jan 22 16:29:04 crc kubenswrapper[4825]: I0122 16:29:04.030890 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-l7f7x_30bcd0c8-9381-4b99-a083-b014af82df43/extract-content/0.log" Jan 22 16:29:04 crc kubenswrapper[4825]: I0122 16:29:04.041146 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-l7f7x_30bcd0c8-9381-4b99-a083-b014af82df43/extract-content/0.log" Jan 22 16:29:04 crc kubenswrapper[4825]: I0122 16:29:04.295903 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-l7f7x_30bcd0c8-9381-4b99-a083-b014af82df43/extract-utilities/0.log" Jan 22 16:29:04 crc kubenswrapper[4825]: I0122 16:29:04.344455 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-l7f7x_30bcd0c8-9381-4b99-a083-b014af82df43/extract-content/0.log" Jan 22 16:29:04 crc kubenswrapper[4825]: I0122 16:29:04.391529 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qh4sm_8613a8ac-d68f-4ce7-b17b-ab85266760b3/extract-utilities/0.log" Jan 22 16:29:04 crc kubenswrapper[4825]: I0122 16:29:04.443028 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-l7f7x_30bcd0c8-9381-4b99-a083-b014af82df43/registry-server/0.log" Jan 22 16:29:04 crc kubenswrapper[4825]: I0122 16:29:04.911120 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qh4sm_8613a8ac-d68f-4ce7-b17b-ab85266760b3/extract-utilities/0.log" Jan 22 16:29:04 crc kubenswrapper[4825]: I0122 16:29:04.949070 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-qh4sm_8613a8ac-d68f-4ce7-b17b-ab85266760b3/extract-content/0.log" Jan 22 16:29:04 crc kubenswrapper[4825]: I0122 16:29:04.957119 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qh4sm_8613a8ac-d68f-4ce7-b17b-ab85266760b3/extract-content/0.log" Jan 22 16:29:05 crc kubenswrapper[4825]: I0122 16:29:05.131414 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qh4sm_8613a8ac-d68f-4ce7-b17b-ab85266760b3/extract-utilities/0.log" Jan 22 16:29:05 crc kubenswrapper[4825]: I0122 16:29:05.132347 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qh4sm_8613a8ac-d68f-4ce7-b17b-ab85266760b3/extract-content/0.log" Jan 22 16:29:05 crc kubenswrapper[4825]: I0122 16:29:05.166212 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7qk5_e06cceab-9530-4e72-b66b-5d8086ea4c51/extract-utilities/0.log" Jan 22 16:29:05 crc kubenswrapper[4825]: I0122 16:29:05.197787 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qh4sm_8613a8ac-d68f-4ce7-b17b-ab85266760b3/registry-server/0.log" Jan 22 16:29:05 crc kubenswrapper[4825]: I0122 16:29:05.431219 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7qk5_e06cceab-9530-4e72-b66b-5d8086ea4c51/extract-utilities/0.log" Jan 22 16:29:05 crc kubenswrapper[4825]: I0122 16:29:05.431248 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7qk5_e06cceab-9530-4e72-b66b-5d8086ea4c51/extract-content/0.log" Jan 22 16:29:05 crc kubenswrapper[4825]: I0122 16:29:05.460191 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7qk5_e06cceab-9530-4e72-b66b-5d8086ea4c51/extract-content/0.log" Jan 22 16:29:05 crc kubenswrapper[4825]: I0122 16:29:05.669346 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7qk5_e06cceab-9530-4e72-b66b-5d8086ea4c51/extract-content/0.log" Jan 22 16:29:05 crc kubenswrapper[4825]: I0122 16:29:05.673711 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9dmgd_c18532f0-448c-4b68-a9b5-184026c8742e/extract-utilities/0.log" Jan 22 16:29:05 crc kubenswrapper[4825]: I0122 16:29:05.757087 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7qk5_e06cceab-9530-4e72-b66b-5d8086ea4c51/registry-server/0.log" Jan 22 16:29:05 crc kubenswrapper[4825]: I0122 16:29:05.767018 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7qk5_e06cceab-9530-4e72-b66b-5d8086ea4c51/extract-utilities/0.log" Jan 22 16:29:05 crc kubenswrapper[4825]: I0122 16:29:05.961384 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9dmgd_c18532f0-448c-4b68-a9b5-184026c8742e/extract-content/0.log" Jan 22 16:29:05 crc kubenswrapper[4825]: I0122 16:29:05.961473 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9dmgd_c18532f0-448c-4b68-a9b5-184026c8742e/extract-utilities/0.log" Jan 22 16:29:05 crc kubenswrapper[4825]: I0122 16:29:05.994232 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9dmgd_c18532f0-448c-4b68-a9b5-184026c8742e/extract-content/0.log" Jan 22 16:29:06 crc kubenswrapper[4825]: I0122 16:29:06.199007 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9dmgd_c18532f0-448c-4b68-a9b5-184026c8742e/extract-utilities/0.log" 
Jan 22 16:29:06 crc kubenswrapper[4825]: I0122 16:29:06.236541 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9dmgd_c18532f0-448c-4b68-a9b5-184026c8742e/extract-content/0.log" Jan 22 16:29:06 crc kubenswrapper[4825]: I0122 16:29:06.247534 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9dmgd_c18532f0-448c-4b68-a9b5-184026c8742e/registry-server/0.log" Jan 22 16:29:06 crc kubenswrapper[4825]: I0122 16:29:06.287997 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j26fp_b98246b0-1146-407d-99ba-0a8a93d3af50/extract-utilities/0.log" Jan 22 16:29:06 crc kubenswrapper[4825]: I0122 16:29:06.455545 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j26fp_b98246b0-1146-407d-99ba-0a8a93d3af50/extract-utilities/0.log" Jan 22 16:29:06 crc kubenswrapper[4825]: I0122 16:29:06.480831 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j26fp_b98246b0-1146-407d-99ba-0a8a93d3af50/extract-content/0.log" Jan 22 16:29:06 crc kubenswrapper[4825]: I0122 16:29:06.505836 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j26fp_b98246b0-1146-407d-99ba-0a8a93d3af50/extract-content/0.log" Jan 22 16:29:06 crc kubenswrapper[4825]: I0122 16:29:06.716010 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j26fp_b98246b0-1146-407d-99ba-0a8a93d3af50/extract-content/0.log" Jan 22 16:29:06 crc kubenswrapper[4825]: I0122 16:29:06.745197 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qg9m6_60dda316-e11c-4286-866e-52fa6e3db5f9/extract-utilities/0.log" Jan 22 16:29:06 crc kubenswrapper[4825]: I0122 16:29:06.759236 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-j26fp_b98246b0-1146-407d-99ba-0a8a93d3af50/registry-server/0.log" Jan 22 16:29:06 crc kubenswrapper[4825]: I0122 16:29:06.774175 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j26fp_b98246b0-1146-407d-99ba-0a8a93d3af50/extract-utilities/0.log" Jan 22 16:29:06 crc kubenswrapper[4825]: I0122 16:29:06.970257 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qg9m6_60dda316-e11c-4286-866e-52fa6e3db5f9/extract-content/0.log" Jan 22 16:29:06 crc kubenswrapper[4825]: I0122 16:29:06.980878 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qg9m6_60dda316-e11c-4286-866e-52fa6e3db5f9/extract-content/0.log" Jan 22 16:29:06 crc kubenswrapper[4825]: I0122 16:29:06.987495 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qg9m6_60dda316-e11c-4286-866e-52fa6e3db5f9/extract-utilities/0.log" Jan 22 16:29:07 crc kubenswrapper[4825]: I0122 16:29:07.253234 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qg9m6_60dda316-e11c-4286-866e-52fa6e3db5f9/extract-content/0.log" Jan 22 16:29:07 crc kubenswrapper[4825]: I0122 16:29:07.287931 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qg9m6_60dda316-e11c-4286-866e-52fa6e3db5f9/extract-utilities/0.log" Jan 22 16:29:07 crc kubenswrapper[4825]: I0122 16:29:07.844124 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qg9m6_60dda316-e11c-4286-866e-52fa6e3db5f9/registry-server/0.log" Jan 22 16:29:08 crc kubenswrapper[4825]: I0122 16:29:08.517337 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea" Jan 22 16:29:08 crc kubenswrapper[4825]: E0122 16:29:08.517866 
4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:29:22 crc kubenswrapper[4825]: I0122 16:29:22.517698 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea" Jan 22 16:29:22 crc kubenswrapper[4825]: E0122 16:29:22.518548 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:29:23 crc kubenswrapper[4825]: I0122 16:29:23.862142 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7ff58c757-lfgq4_290bb62e-9a41-4a2a-886c-803ffa414dce/prometheus-operator-admission-webhook/0.log" Jan 22 16:29:23 crc kubenswrapper[4825]: I0122 16:29:23.914266 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7ff58c757-6xffs_0eec7a70-3ecb-430c-b94d-94ad04cf5ee1/prometheus-operator-admission-webhook/0.log" Jan 22 16:29:23 crc kubenswrapper[4825]: I0122 16:29:23.917079 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-x6k4r_1d4ea96c-02ed-4924-bdc0-0fa0a9932467/prometheus-operator/0.log" Jan 22 16:29:24 crc kubenswrapper[4825]: I0122 16:29:24.126150 4825 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-k58sz_cd517bb5-d9d2-4e12-8a06-bb673cbb9dc7/operator/0.log" Jan 22 16:29:24 crc kubenswrapper[4825]: I0122 16:29:24.171192 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-8wflx_2df2d5aa-6948-42a3-8ba0-7eedffb87020/perses-operator/0.log" Jan 22 16:29:34 crc kubenswrapper[4825]: I0122 16:29:34.517646 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea" Jan 22 16:29:34 crc kubenswrapper[4825]: E0122 16:29:34.527453 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:29:40 crc kubenswrapper[4825]: I0122 16:29:40.591198 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v4kxm"] Jan 22 16:29:40 crc kubenswrapper[4825]: E0122 16:29:40.592487 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="856e5f66-282f-414e-abf4-9a8d6ffb108e" containerName="container-00" Jan 22 16:29:40 crc kubenswrapper[4825]: I0122 16:29:40.592507 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="856e5f66-282f-414e-abf4-9a8d6ffb108e" containerName="container-00" Jan 22 16:29:40 crc kubenswrapper[4825]: I0122 16:29:40.592776 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="856e5f66-282f-414e-abf4-9a8d6ffb108e" containerName="container-00" Jan 22 16:29:40 crc kubenswrapper[4825]: I0122 16:29:40.594437 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v4kxm" Jan 22 16:29:40 crc kubenswrapper[4825]: I0122 16:29:40.634036 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v4kxm"] Jan 22 16:29:40 crc kubenswrapper[4825]: I0122 16:29:40.788720 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03c21f3c-f842-4e82-b405-0225c5d9c350-catalog-content\") pod \"certified-operators-v4kxm\" (UID: \"03c21f3c-f842-4e82-b405-0225c5d9c350\") " pod="openshift-marketplace/certified-operators-v4kxm" Jan 22 16:29:40 crc kubenswrapper[4825]: I0122 16:29:40.788804 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03c21f3c-f842-4e82-b405-0225c5d9c350-utilities\") pod \"certified-operators-v4kxm\" (UID: \"03c21f3c-f842-4e82-b405-0225c5d9c350\") " pod="openshift-marketplace/certified-operators-v4kxm" Jan 22 16:29:40 crc kubenswrapper[4825]: I0122 16:29:40.788830 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br4mg\" (UniqueName: \"kubernetes.io/projected/03c21f3c-f842-4e82-b405-0225c5d9c350-kube-api-access-br4mg\") pod \"certified-operators-v4kxm\" (UID: \"03c21f3c-f842-4e82-b405-0225c5d9c350\") " pod="openshift-marketplace/certified-operators-v4kxm" Jan 22 16:29:40 crc kubenswrapper[4825]: I0122 16:29:40.891175 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03c21f3c-f842-4e82-b405-0225c5d9c350-catalog-content\") pod \"certified-operators-v4kxm\" (UID: \"03c21f3c-f842-4e82-b405-0225c5d9c350\") " pod="openshift-marketplace/certified-operators-v4kxm" Jan 22 16:29:40 crc kubenswrapper[4825]: I0122 16:29:40.891286 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03c21f3c-f842-4e82-b405-0225c5d9c350-utilities\") pod \"certified-operators-v4kxm\" (UID: \"03c21f3c-f842-4e82-b405-0225c5d9c350\") " pod="openshift-marketplace/certified-operators-v4kxm" Jan 22 16:29:40 crc kubenswrapper[4825]: I0122 16:29:40.891314 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br4mg\" (UniqueName: \"kubernetes.io/projected/03c21f3c-f842-4e82-b405-0225c5d9c350-kube-api-access-br4mg\") pod \"certified-operators-v4kxm\" (UID: \"03c21f3c-f842-4e82-b405-0225c5d9c350\") " pod="openshift-marketplace/certified-operators-v4kxm" Jan 22 16:29:40 crc kubenswrapper[4825]: I0122 16:29:40.892560 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03c21f3c-f842-4e82-b405-0225c5d9c350-utilities\") pod \"certified-operators-v4kxm\" (UID: \"03c21f3c-f842-4e82-b405-0225c5d9c350\") " pod="openshift-marketplace/certified-operators-v4kxm" Jan 22 16:29:40 crc kubenswrapper[4825]: I0122 16:29:40.892551 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03c21f3c-f842-4e82-b405-0225c5d9c350-catalog-content\") pod \"certified-operators-v4kxm\" (UID: \"03c21f3c-f842-4e82-b405-0225c5d9c350\") " pod="openshift-marketplace/certified-operators-v4kxm" Jan 22 16:29:40 crc kubenswrapper[4825]: I0122 16:29:40.914019 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br4mg\" (UniqueName: \"kubernetes.io/projected/03c21f3c-f842-4e82-b405-0225c5d9c350-kube-api-access-br4mg\") pod \"certified-operators-v4kxm\" (UID: \"03c21f3c-f842-4e82-b405-0225c5d9c350\") " pod="openshift-marketplace/certified-operators-v4kxm" Jan 22 16:29:40 crc kubenswrapper[4825]: I0122 16:29:40.929620 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v4kxm" Jan 22 16:29:40 crc kubenswrapper[4825]: I0122 16:29:40.951610 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5867677bb9-kgwmt_52f9b085-39e8-4a44-93c0-be3d751cb667/manager/0.log" Jan 22 16:29:41 crc kubenswrapper[4825]: I0122 16:29:41.134762 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5867677bb9-kgwmt_52f9b085-39e8-4a44-93c0-be3d751cb667/kube-rbac-proxy/0.log" Jan 22 16:29:41 crc kubenswrapper[4825]: I0122 16:29:41.760331 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v4kxm"] Jan 22 16:29:42 crc kubenswrapper[4825]: I0122 16:29:42.525635 4825 generic.go:334] "Generic (PLEG): container finished" podID="03c21f3c-f842-4e82-b405-0225c5d9c350" containerID="66c34f397432cd41bcd03128d2cdaec134e54ac7e3f6e957ee417bd9d7e76c8e" exitCode=0 Jan 22 16:29:42 crc kubenswrapper[4825]: I0122 16:29:42.525680 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v4kxm" event={"ID":"03c21f3c-f842-4e82-b405-0225c5d9c350","Type":"ContainerDied","Data":"66c34f397432cd41bcd03128d2cdaec134e54ac7e3f6e957ee417bd9d7e76c8e"} Jan 22 16:29:42 crc kubenswrapper[4825]: I0122 16:29:42.526008 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v4kxm" event={"ID":"03c21f3c-f842-4e82-b405-0225c5d9c350","Type":"ContainerStarted","Data":"19e6306f1279c318cb0a0b7a24f6c8853c17d2bdbda7645f969cd87d6ee622be"} Jan 22 16:29:42 crc kubenswrapper[4825]: I0122 16:29:42.528091 4825 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 16:29:48 crc kubenswrapper[4825]: I0122 16:29:48.518324 4825 scope.go:117] "RemoveContainer" 
containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea" Jan 22 16:29:48 crc kubenswrapper[4825]: E0122 16:29:48.518910 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:29:52 crc kubenswrapper[4825]: I0122 16:29:52.703709 4825 generic.go:334] "Generic (PLEG): container finished" podID="03c21f3c-f842-4e82-b405-0225c5d9c350" containerID="0449643fea437e09af09506fe0efd11a2e1ea269904948e51964cfabb415696a" exitCode=0 Jan 22 16:29:52 crc kubenswrapper[4825]: I0122 16:29:52.703816 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v4kxm" event={"ID":"03c21f3c-f842-4e82-b405-0225c5d9c350","Type":"ContainerDied","Data":"0449643fea437e09af09506fe0efd11a2e1ea269904948e51964cfabb415696a"} Jan 22 16:29:52 crc kubenswrapper[4825]: E0122 16:29:52.982562 4825 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.97:56304->38.102.83.97:34763: write tcp 38.102.83.97:56304->38.102.83.97:34763: write: connection reset by peer Jan 22 16:29:54 crc kubenswrapper[4825]: I0122 16:29:54.738921 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v4kxm" event={"ID":"03c21f3c-f842-4e82-b405-0225c5d9c350","Type":"ContainerStarted","Data":"37f5c748ef4568ef165c932c8ce4b4f21f2e1d38b31bc71ac9dc5c28edc52feb"} Jan 22 16:29:54 crc kubenswrapper[4825]: I0122 16:29:54.773648 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v4kxm" podStartSLOduration=3.750661331 podStartE2EDuration="14.773605489s" 
podCreationTimestamp="2026-01-22 16:29:40 +0000 UTC" firstStartedPulling="2026-01-22 16:29:42.527583734 +0000 UTC m=+3929.289110654" lastFinishedPulling="2026-01-22 16:29:53.550527902 +0000 UTC m=+3940.312054812" observedRunningTime="2026-01-22 16:29:54.765769096 +0000 UTC m=+3941.527296006" watchObservedRunningTime="2026-01-22 16:29:54.773605489 +0000 UTC m=+3941.535132399" Jan 22 16:29:59 crc kubenswrapper[4825]: I0122 16:29:59.517544 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea" Jan 22 16:29:59 crc kubenswrapper[4825]: E0122 16:29:59.518330 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:30:00 crc kubenswrapper[4825]: I0122 16:30:00.222610 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx"] Jan 22 16:30:00 crc kubenswrapper[4825]: I0122 16:30:00.224450 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx" Jan 22 16:30:00 crc kubenswrapper[4825]: I0122 16:30:00.227346 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 16:30:00 crc kubenswrapper[4825]: I0122 16:30:00.227793 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 16:30:00 crc kubenswrapper[4825]: I0122 16:30:00.246374 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46d818f3-a08d-42f3-8b34-3f0506db3203-config-volume\") pod \"collect-profiles-29484990-jppcx\" (UID: \"46d818f3-a08d-42f3-8b34-3f0506db3203\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx" Jan 22 16:30:00 crc kubenswrapper[4825]: I0122 16:30:00.246537 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5595\" (UniqueName: \"kubernetes.io/projected/46d818f3-a08d-42f3-8b34-3f0506db3203-kube-api-access-z5595\") pod \"collect-profiles-29484990-jppcx\" (UID: \"46d818f3-a08d-42f3-8b34-3f0506db3203\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx" Jan 22 16:30:00 crc kubenswrapper[4825]: I0122 16:30:00.246613 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46d818f3-a08d-42f3-8b34-3f0506db3203-secret-volume\") pod \"collect-profiles-29484990-jppcx\" (UID: \"46d818f3-a08d-42f3-8b34-3f0506db3203\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx" Jan 22 16:30:00 crc kubenswrapper[4825]: I0122 16:30:00.247158 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx"] Jan 22 16:30:00 crc kubenswrapper[4825]: I0122 16:30:00.348396 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46d818f3-a08d-42f3-8b34-3f0506db3203-config-volume\") pod \"collect-profiles-29484990-jppcx\" (UID: \"46d818f3-a08d-42f3-8b34-3f0506db3203\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx" Jan 22 16:30:00 crc kubenswrapper[4825]: I0122 16:30:00.348481 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5595\" (UniqueName: \"kubernetes.io/projected/46d818f3-a08d-42f3-8b34-3f0506db3203-kube-api-access-z5595\") pod \"collect-profiles-29484990-jppcx\" (UID: \"46d818f3-a08d-42f3-8b34-3f0506db3203\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx" Jan 22 16:30:00 crc kubenswrapper[4825]: I0122 16:30:00.348542 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46d818f3-a08d-42f3-8b34-3f0506db3203-secret-volume\") pod \"collect-profiles-29484990-jppcx\" (UID: \"46d818f3-a08d-42f3-8b34-3f0506db3203\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx" Jan 22 16:30:00 crc kubenswrapper[4825]: I0122 16:30:00.350034 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46d818f3-a08d-42f3-8b34-3f0506db3203-config-volume\") pod \"collect-profiles-29484990-jppcx\" (UID: \"46d818f3-a08d-42f3-8b34-3f0506db3203\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx" Jan 22 16:30:00 crc kubenswrapper[4825]: I0122 16:30:00.358608 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/46d818f3-a08d-42f3-8b34-3f0506db3203-secret-volume\") pod \"collect-profiles-29484990-jppcx\" (UID: \"46d818f3-a08d-42f3-8b34-3f0506db3203\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx" Jan 22 16:30:00 crc kubenswrapper[4825]: I0122 16:30:00.370441 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5595\" (UniqueName: \"kubernetes.io/projected/46d818f3-a08d-42f3-8b34-3f0506db3203-kube-api-access-z5595\") pod \"collect-profiles-29484990-jppcx\" (UID: \"46d818f3-a08d-42f3-8b34-3f0506db3203\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx" Jan 22 16:30:00 crc kubenswrapper[4825]: I0122 16:30:00.589651 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx" Jan 22 16:30:00 crc kubenswrapper[4825]: I0122 16:30:00.930370 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v4kxm" Jan 22 16:30:00 crc kubenswrapper[4825]: I0122 16:30:00.930795 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v4kxm" Jan 22 16:30:01 crc kubenswrapper[4825]: I0122 16:30:01.028370 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v4kxm" Jan 22 16:30:01 crc kubenswrapper[4825]: I0122 16:30:01.767371 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx"] Jan 22 16:30:01 crc kubenswrapper[4825]: I0122 16:30:01.918249 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx" event={"ID":"46d818f3-a08d-42f3-8b34-3f0506db3203","Type":"ContainerStarted","Data":"fa626cb49ca385d51801dedd694ff448b8fa84344151c0035bb5f21261024b95"} Jan 22 
16:30:01 crc kubenswrapper[4825]: I0122 16:30:01.993081 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v4kxm" Jan 22 16:30:02 crc kubenswrapper[4825]: I0122 16:30:02.115670 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v4kxm"] Jan 22 16:30:02 crc kubenswrapper[4825]: I0122 16:30:02.198787 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lr5ws"] Jan 22 16:30:02 crc kubenswrapper[4825]: I0122 16:30:02.199163 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lr5ws" podUID="2a5fee3b-6b13-47f7-aa99-8ac3068afb93" containerName="registry-server" containerID="cri-o://9aa28e43cfd8d7ef58f14413e8fdae09b1da86a9f20f01c4f12f23f339924566" gracePeriod=2 Jan 22 16:30:02 crc kubenswrapper[4825]: I0122 16:30:02.984651 4825 generic.go:334] "Generic (PLEG): container finished" podID="2a5fee3b-6b13-47f7-aa99-8ac3068afb93" containerID="9aa28e43cfd8d7ef58f14413e8fdae09b1da86a9f20f01c4f12f23f339924566" exitCode=0 Jan 22 16:30:02 crc kubenswrapper[4825]: I0122 16:30:02.985021 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr5ws" event={"ID":"2a5fee3b-6b13-47f7-aa99-8ac3068afb93","Type":"ContainerDied","Data":"9aa28e43cfd8d7ef58f14413e8fdae09b1da86a9f20f01c4f12f23f339924566"} Jan 22 16:30:03 crc kubenswrapper[4825]: I0122 16:30:03.002472 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx" event={"ID":"46d818f3-a08d-42f3-8b34-3f0506db3203","Type":"ContainerStarted","Data":"44be6605dee9aff52b25a6580f8e331a8e5d56d9f28a7494f3cfc507058e61a2"} Jan 22 16:30:03 crc kubenswrapper[4825]: I0122 16:30:03.871383 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lr5ws" Jan 22 16:30:03 crc kubenswrapper[4825]: I0122 16:30:03.910484 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5fee3b-6b13-47f7-aa99-8ac3068afb93-catalog-content\") pod \"2a5fee3b-6b13-47f7-aa99-8ac3068afb93\" (UID: \"2a5fee3b-6b13-47f7-aa99-8ac3068afb93\") " Jan 22 16:30:03 crc kubenswrapper[4825]: I0122 16:30:03.910574 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz4xf\" (UniqueName: \"kubernetes.io/projected/2a5fee3b-6b13-47f7-aa99-8ac3068afb93-kube-api-access-gz4xf\") pod \"2a5fee3b-6b13-47f7-aa99-8ac3068afb93\" (UID: \"2a5fee3b-6b13-47f7-aa99-8ac3068afb93\") " Jan 22 16:30:03 crc kubenswrapper[4825]: I0122 16:30:03.910596 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5fee3b-6b13-47f7-aa99-8ac3068afb93-utilities\") pod \"2a5fee3b-6b13-47f7-aa99-8ac3068afb93\" (UID: \"2a5fee3b-6b13-47f7-aa99-8ac3068afb93\") " Jan 22 16:30:03 crc kubenswrapper[4825]: I0122 16:30:03.911259 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a5fee3b-6b13-47f7-aa99-8ac3068afb93-utilities" (OuterVolumeSpecName: "utilities") pod "2a5fee3b-6b13-47f7-aa99-8ac3068afb93" (UID: "2a5fee3b-6b13-47f7-aa99-8ac3068afb93"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:30:03 crc kubenswrapper[4825]: I0122 16:30:03.914103 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx" podStartSLOduration=3.914087076 podStartE2EDuration="3.914087076s" podCreationTimestamp="2026-01-22 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:03.037214534 +0000 UTC m=+3949.798741444" watchObservedRunningTime="2026-01-22 16:30:03.914087076 +0000 UTC m=+3950.675613986" Jan 22 16:30:04 crc kubenswrapper[4825]: I0122 16:30:04.037913 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5fee3b-6b13-47f7-aa99-8ac3068afb93-kube-api-access-gz4xf" (OuterVolumeSpecName: "kube-api-access-gz4xf") pod "2a5fee3b-6b13-47f7-aa99-8ac3068afb93" (UID: "2a5fee3b-6b13-47f7-aa99-8ac3068afb93"). InnerVolumeSpecName "kube-api-access-gz4xf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:30:04 crc kubenswrapper[4825]: I0122 16:30:04.042798 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz4xf\" (UniqueName: \"kubernetes.io/projected/2a5fee3b-6b13-47f7-aa99-8ac3068afb93-kube-api-access-gz4xf\") on node \"crc\" DevicePath \"\"" Jan 22 16:30:04 crc kubenswrapper[4825]: I0122 16:30:04.042821 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5fee3b-6b13-47f7-aa99-8ac3068afb93-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:30:04 crc kubenswrapper[4825]: I0122 16:30:04.079171 4825 generic.go:334] "Generic (PLEG): container finished" podID="46d818f3-a08d-42f3-8b34-3f0506db3203" containerID="44be6605dee9aff52b25a6580f8e331a8e5d56d9f28a7494f3cfc507058e61a2" exitCode=0 Jan 22 16:30:04 crc kubenswrapper[4825]: I0122 16:30:04.079573 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx" event={"ID":"46d818f3-a08d-42f3-8b34-3f0506db3203","Type":"ContainerDied","Data":"44be6605dee9aff52b25a6580f8e331a8e5d56d9f28a7494f3cfc507058e61a2"} Jan 22 16:30:04 crc kubenswrapper[4825]: I0122 16:30:04.088214 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lr5ws" Jan 22 16:30:04 crc kubenswrapper[4825]: I0122 16:30:04.089203 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr5ws" event={"ID":"2a5fee3b-6b13-47f7-aa99-8ac3068afb93","Type":"ContainerDied","Data":"5c77530ee42392af2fe1afccf222d3c5ed049f3304f2d00f9496c6391c05969b"} Jan 22 16:30:04 crc kubenswrapper[4825]: I0122 16:30:04.089273 4825 scope.go:117] "RemoveContainer" containerID="9aa28e43cfd8d7ef58f14413e8fdae09b1da86a9f20f01c4f12f23f339924566" Jan 22 16:30:04 crc kubenswrapper[4825]: I0122 16:30:04.105948 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a5fee3b-6b13-47f7-aa99-8ac3068afb93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a5fee3b-6b13-47f7-aa99-8ac3068afb93" (UID: "2a5fee3b-6b13-47f7-aa99-8ac3068afb93"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:30:04 crc kubenswrapper[4825]: I0122 16:30:04.145069 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5fee3b-6b13-47f7-aa99-8ac3068afb93-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:30:04 crc kubenswrapper[4825]: I0122 16:30:04.161165 4825 scope.go:117] "RemoveContainer" containerID="c3817c6b494acc4e865e2559eabeef9896f0b8a87dd450a6c3368571bfedb3fa" Jan 22 16:30:04 crc kubenswrapper[4825]: I0122 16:30:04.192687 4825 scope.go:117] "RemoveContainer" containerID="069fdba9e8f149b9cdefd147fb0cffcbb3e5ade662800f827fd6b58c18257e59" Jan 22 16:30:04 crc kubenswrapper[4825]: I0122 16:30:04.466204 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lr5ws"] Jan 22 16:30:04 crc kubenswrapper[4825]: I0122 16:30:04.482841 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lr5ws"] 
Jan 22 16:30:05 crc kubenswrapper[4825]: I0122 16:30:05.529912 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5fee3b-6b13-47f7-aa99-8ac3068afb93" path="/var/lib/kubelet/pods/2a5fee3b-6b13-47f7-aa99-8ac3068afb93/volumes" Jan 22 16:30:05 crc kubenswrapper[4825]: E0122 16:30:05.589358 4825 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.97:53228->38.102.83.97:34763: write tcp 38.102.83.97:53228->38.102.83.97:34763: write: broken pipe Jan 22 16:30:06 crc kubenswrapper[4825]: I0122 16:30:06.233959 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx" event={"ID":"46d818f3-a08d-42f3-8b34-3f0506db3203","Type":"ContainerDied","Data":"fa626cb49ca385d51801dedd694ff448b8fa84344151c0035bb5f21261024b95"} Jan 22 16:30:06 crc kubenswrapper[4825]: I0122 16:30:06.234207 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa626cb49ca385d51801dedd694ff448b8fa84344151c0035bb5f21261024b95" Jan 22 16:30:06 crc kubenswrapper[4825]: I0122 16:30:06.285143 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx" Jan 22 16:30:06 crc kubenswrapper[4825]: I0122 16:30:06.400741 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46d818f3-a08d-42f3-8b34-3f0506db3203-secret-volume\") pod \"46d818f3-a08d-42f3-8b34-3f0506db3203\" (UID: \"46d818f3-a08d-42f3-8b34-3f0506db3203\") " Jan 22 16:30:06 crc kubenswrapper[4825]: I0122 16:30:06.401017 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46d818f3-a08d-42f3-8b34-3f0506db3203-config-volume\") pod \"46d818f3-a08d-42f3-8b34-3f0506db3203\" (UID: \"46d818f3-a08d-42f3-8b34-3f0506db3203\") " Jan 22 16:30:06 crc kubenswrapper[4825]: I0122 16:30:06.401124 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5595\" (UniqueName: \"kubernetes.io/projected/46d818f3-a08d-42f3-8b34-3f0506db3203-kube-api-access-z5595\") pod \"46d818f3-a08d-42f3-8b34-3f0506db3203\" (UID: \"46d818f3-a08d-42f3-8b34-3f0506db3203\") " Jan 22 16:30:06 crc kubenswrapper[4825]: I0122 16:30:06.403398 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46d818f3-a08d-42f3-8b34-3f0506db3203-config-volume" (OuterVolumeSpecName: "config-volume") pod "46d818f3-a08d-42f3-8b34-3f0506db3203" (UID: "46d818f3-a08d-42f3-8b34-3f0506db3203"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:30:06 crc kubenswrapper[4825]: I0122 16:30:06.407912 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46d818f3-a08d-42f3-8b34-3f0506db3203-kube-api-access-z5595" (OuterVolumeSpecName: "kube-api-access-z5595") pod "46d818f3-a08d-42f3-8b34-3f0506db3203" (UID: "46d818f3-a08d-42f3-8b34-3f0506db3203"). 
InnerVolumeSpecName "kube-api-access-z5595". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:30:06 crc kubenswrapper[4825]: I0122 16:30:06.427127 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d818f3-a08d-42f3-8b34-3f0506db3203-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "46d818f3-a08d-42f3-8b34-3f0506db3203" (UID: "46d818f3-a08d-42f3-8b34-3f0506db3203"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:30:06 crc kubenswrapper[4825]: I0122 16:30:06.503633 4825 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46d818f3-a08d-42f3-8b34-3f0506db3203-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 16:30:06 crc kubenswrapper[4825]: I0122 16:30:06.503681 4825 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46d818f3-a08d-42f3-8b34-3f0506db3203-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 16:30:06 crc kubenswrapper[4825]: I0122 16:30:06.503692 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5595\" (UniqueName: \"kubernetes.io/projected/46d818f3-a08d-42f3-8b34-3f0506db3203-kube-api-access-z5595\") on node \"crc\" DevicePath \"\"" Jan 22 16:30:07 crc kubenswrapper[4825]: I0122 16:30:07.241879 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-jppcx" Jan 22 16:30:07 crc kubenswrapper[4825]: I0122 16:30:07.456498 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4"] Jan 22 16:30:07 crc kubenswrapper[4825]: I0122 16:30:07.474924 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484945-ptff4"] Jan 22 16:30:07 crc kubenswrapper[4825]: I0122 16:30:07.530214 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82d01009-6c3f-4fc0-9fdb-834e14ad78a2" path="/var/lib/kubelet/pods/82d01009-6c3f-4fc0-9fdb-834e14ad78a2/volumes" Jan 22 16:30:14 crc kubenswrapper[4825]: I0122 16:30:14.518071 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea" Jan 22 16:30:14 crc kubenswrapper[4825]: E0122 16:30:14.518898 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:30:15 crc kubenswrapper[4825]: I0122 16:30:15.382648 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7qq4m"] Jan 22 16:30:15 crc kubenswrapper[4825]: E0122 16:30:15.383468 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d818f3-a08d-42f3-8b34-3f0506db3203" containerName="collect-profiles" Jan 22 16:30:15 crc kubenswrapper[4825]: I0122 16:30:15.383568 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d818f3-a08d-42f3-8b34-3f0506db3203" containerName="collect-profiles" Jan 22 16:30:15 crc 
kubenswrapper[4825]: E0122 16:30:15.383657 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5fee3b-6b13-47f7-aa99-8ac3068afb93" containerName="extract-utilities" Jan 22 16:30:15 crc kubenswrapper[4825]: I0122 16:30:15.383718 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5fee3b-6b13-47f7-aa99-8ac3068afb93" containerName="extract-utilities" Jan 22 16:30:15 crc kubenswrapper[4825]: E0122 16:30:15.383778 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5fee3b-6b13-47f7-aa99-8ac3068afb93" containerName="registry-server" Jan 22 16:30:15 crc kubenswrapper[4825]: I0122 16:30:15.383828 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5fee3b-6b13-47f7-aa99-8ac3068afb93" containerName="registry-server" Jan 22 16:30:15 crc kubenswrapper[4825]: E0122 16:30:15.383887 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5fee3b-6b13-47f7-aa99-8ac3068afb93" containerName="extract-content" Jan 22 16:30:15 crc kubenswrapper[4825]: I0122 16:30:15.383942 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5fee3b-6b13-47f7-aa99-8ac3068afb93" containerName="extract-content" Jan 22 16:30:15 crc kubenswrapper[4825]: I0122 16:30:15.384233 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a5fee3b-6b13-47f7-aa99-8ac3068afb93" containerName="registry-server" Jan 22 16:30:15 crc kubenswrapper[4825]: I0122 16:30:15.384306 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="46d818f3-a08d-42f3-8b34-3f0506db3203" containerName="collect-profiles" Jan 22 16:30:15 crc kubenswrapper[4825]: I0122 16:30:15.385890 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7qq4m" Jan 22 16:30:15 crc kubenswrapper[4825]: I0122 16:30:15.400619 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7qq4m"] Jan 22 16:30:15 crc kubenswrapper[4825]: I0122 16:30:15.482006 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24963b41-e2e0-4376-8363-3ee89d8d61fa-catalog-content\") pod \"redhat-operators-7qq4m\" (UID: \"24963b41-e2e0-4376-8363-3ee89d8d61fa\") " pod="openshift-marketplace/redhat-operators-7qq4m" Jan 22 16:30:15 crc kubenswrapper[4825]: I0122 16:30:15.482421 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24963b41-e2e0-4376-8363-3ee89d8d61fa-utilities\") pod \"redhat-operators-7qq4m\" (UID: \"24963b41-e2e0-4376-8363-3ee89d8d61fa\") " pod="openshift-marketplace/redhat-operators-7qq4m" Jan 22 16:30:15 crc kubenswrapper[4825]: I0122 16:30:15.482916 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kdbj\" (UniqueName: \"kubernetes.io/projected/24963b41-e2e0-4376-8363-3ee89d8d61fa-kube-api-access-9kdbj\") pod \"redhat-operators-7qq4m\" (UID: \"24963b41-e2e0-4376-8363-3ee89d8d61fa\") " pod="openshift-marketplace/redhat-operators-7qq4m" Jan 22 16:30:15 crc kubenswrapper[4825]: I0122 16:30:15.584851 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24963b41-e2e0-4376-8363-3ee89d8d61fa-catalog-content\") pod \"redhat-operators-7qq4m\" (UID: \"24963b41-e2e0-4376-8363-3ee89d8d61fa\") " pod="openshift-marketplace/redhat-operators-7qq4m" Jan 22 16:30:15 crc kubenswrapper[4825]: I0122 16:30:15.584916 4825 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24963b41-e2e0-4376-8363-3ee89d8d61fa-utilities\") pod \"redhat-operators-7qq4m\" (UID: \"24963b41-e2e0-4376-8363-3ee89d8d61fa\") " pod="openshift-marketplace/redhat-operators-7qq4m" Jan 22 16:30:15 crc kubenswrapper[4825]: I0122 16:30:15.585088 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kdbj\" (UniqueName: \"kubernetes.io/projected/24963b41-e2e0-4376-8363-3ee89d8d61fa-kube-api-access-9kdbj\") pod \"redhat-operators-7qq4m\" (UID: \"24963b41-e2e0-4376-8363-3ee89d8d61fa\") " pod="openshift-marketplace/redhat-operators-7qq4m" Jan 22 16:30:15 crc kubenswrapper[4825]: I0122 16:30:15.585492 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24963b41-e2e0-4376-8363-3ee89d8d61fa-catalog-content\") pod \"redhat-operators-7qq4m\" (UID: \"24963b41-e2e0-4376-8363-3ee89d8d61fa\") " pod="openshift-marketplace/redhat-operators-7qq4m" Jan 22 16:30:15 crc kubenswrapper[4825]: I0122 16:30:15.585517 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24963b41-e2e0-4376-8363-3ee89d8d61fa-utilities\") pod \"redhat-operators-7qq4m\" (UID: \"24963b41-e2e0-4376-8363-3ee89d8d61fa\") " pod="openshift-marketplace/redhat-operators-7qq4m" Jan 22 16:30:15 crc kubenswrapper[4825]: I0122 16:30:15.614922 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kdbj\" (UniqueName: \"kubernetes.io/projected/24963b41-e2e0-4376-8363-3ee89d8d61fa-kube-api-access-9kdbj\") pod \"redhat-operators-7qq4m\" (UID: \"24963b41-e2e0-4376-8363-3ee89d8d61fa\") " pod="openshift-marketplace/redhat-operators-7qq4m" Jan 22 16:30:15 crc kubenswrapper[4825]: I0122 16:30:15.706226 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7qq4m" Jan 22 16:30:16 crc kubenswrapper[4825]: I0122 16:30:16.717129 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7qq4m"] Jan 22 16:30:17 crc kubenswrapper[4825]: I0122 16:30:17.444915 4825 generic.go:334] "Generic (PLEG): container finished" podID="24963b41-e2e0-4376-8363-3ee89d8d61fa" containerID="5067e0c3d340b94d495d359ae217ce85275051b2fcfc344a63b113be0b1343b5" exitCode=0 Jan 22 16:30:17 crc kubenswrapper[4825]: I0122 16:30:17.445167 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qq4m" event={"ID":"24963b41-e2e0-4376-8363-3ee89d8d61fa","Type":"ContainerDied","Data":"5067e0c3d340b94d495d359ae217ce85275051b2fcfc344a63b113be0b1343b5"} Jan 22 16:30:17 crc kubenswrapper[4825]: I0122 16:30:17.445198 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qq4m" event={"ID":"24963b41-e2e0-4376-8363-3ee89d8d61fa","Type":"ContainerStarted","Data":"897f0ab723c14dc6e26931c9901e9c918d895f5c30ce88fbc3b69690aea5cc60"} Jan 22 16:30:19 crc kubenswrapper[4825]: I0122 16:30:19.576498 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qq4m" event={"ID":"24963b41-e2e0-4376-8363-3ee89d8d61fa","Type":"ContainerStarted","Data":"31f3bdc225cc2bdf433b7847c3ef61a1f5ca2fae9db426d5095ef40617ac80a9"} Jan 22 16:30:21 crc kubenswrapper[4825]: I0122 16:30:21.595774 4825 generic.go:334] "Generic (PLEG): container finished" podID="24963b41-e2e0-4376-8363-3ee89d8d61fa" containerID="31f3bdc225cc2bdf433b7847c3ef61a1f5ca2fae9db426d5095ef40617ac80a9" exitCode=0 Jan 22 16:30:21 crc kubenswrapper[4825]: I0122 16:30:21.595873 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qq4m" 
event={"ID":"24963b41-e2e0-4376-8363-3ee89d8d61fa","Type":"ContainerDied","Data":"31f3bdc225cc2bdf433b7847c3ef61a1f5ca2fae9db426d5095ef40617ac80a9"} Jan 22 16:30:24 crc kubenswrapper[4825]: I0122 16:30:24.628549 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qq4m" event={"ID":"24963b41-e2e0-4376-8363-3ee89d8d61fa","Type":"ContainerStarted","Data":"c8ff37b4224fd8e4785dc4f299f49a6888a304cfd0c6536184104773b84345e8"} Jan 22 16:30:24 crc kubenswrapper[4825]: I0122 16:30:24.656905 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7qq4m" podStartSLOduration=4.25056867 podStartE2EDuration="9.656873271s" podCreationTimestamp="2026-01-22 16:30:15 +0000 UTC" firstStartedPulling="2026-01-22 16:30:17.447735593 +0000 UTC m=+3964.209262503" lastFinishedPulling="2026-01-22 16:30:22.854040194 +0000 UTC m=+3969.615567104" observedRunningTime="2026-01-22 16:30:24.64381112 +0000 UTC m=+3971.405338030" watchObservedRunningTime="2026-01-22 16:30:24.656873271 +0000 UTC m=+3971.418400181" Jan 22 16:30:25 crc kubenswrapper[4825]: I0122 16:30:25.708527 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7qq4m" Jan 22 16:30:25 crc kubenswrapper[4825]: I0122 16:30:25.708595 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7qq4m" Jan 22 16:30:26 crc kubenswrapper[4825]: I0122 16:30:26.517887 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea" Jan 22 16:30:26 crc kubenswrapper[4825]: E0122 16:30:26.518800 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" Jan 22 16:30:26 crc kubenswrapper[4825]: I0122 16:30:26.768270 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7qq4m" podUID="24963b41-e2e0-4376-8363-3ee89d8d61fa" containerName="registry-server" probeResult="failure" output=< Jan 22 16:30:26 crc kubenswrapper[4825]: timeout: failed to connect service ":50051" within 1s Jan 22 16:30:26 crc kubenswrapper[4825]: > Jan 22 16:30:34 crc kubenswrapper[4825]: I0122 16:30:34.257099 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sr444"] Jan 22 16:30:34 crc kubenswrapper[4825]: I0122 16:30:34.261920 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sr444" Jan 22 16:30:34 crc kubenswrapper[4825]: I0122 16:30:34.269301 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sr444"] Jan 22 16:30:34 crc kubenswrapper[4825]: I0122 16:30:34.334443 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62e65c79-f242-406b-9df9-350308425f36-utilities\") pod \"community-operators-sr444\" (UID: \"62e65c79-f242-406b-9df9-350308425f36\") " pod="openshift-marketplace/community-operators-sr444" Jan 22 16:30:34 crc kubenswrapper[4825]: I0122 16:30:34.334613 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62e65c79-f242-406b-9df9-350308425f36-catalog-content\") pod \"community-operators-sr444\" (UID: \"62e65c79-f242-406b-9df9-350308425f36\") " pod="openshift-marketplace/community-operators-sr444" Jan 22 
16:30:34 crc kubenswrapper[4825]: I0122 16:30:34.334863 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fwj2\" (UniqueName: \"kubernetes.io/projected/62e65c79-f242-406b-9df9-350308425f36-kube-api-access-7fwj2\") pod \"community-operators-sr444\" (UID: \"62e65c79-f242-406b-9df9-350308425f36\") " pod="openshift-marketplace/community-operators-sr444"
Jan 22 16:30:34 crc kubenswrapper[4825]: I0122 16:30:34.436658 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fwj2\" (UniqueName: \"kubernetes.io/projected/62e65c79-f242-406b-9df9-350308425f36-kube-api-access-7fwj2\") pod \"community-operators-sr444\" (UID: \"62e65c79-f242-406b-9df9-350308425f36\") " pod="openshift-marketplace/community-operators-sr444"
Jan 22 16:30:34 crc kubenswrapper[4825]: I0122 16:30:34.436712 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62e65c79-f242-406b-9df9-350308425f36-utilities\") pod \"community-operators-sr444\" (UID: \"62e65c79-f242-406b-9df9-350308425f36\") " pod="openshift-marketplace/community-operators-sr444"
Jan 22 16:30:34 crc kubenswrapper[4825]: I0122 16:30:34.436790 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62e65c79-f242-406b-9df9-350308425f36-catalog-content\") pod \"community-operators-sr444\" (UID: \"62e65c79-f242-406b-9df9-350308425f36\") " pod="openshift-marketplace/community-operators-sr444"
Jan 22 16:30:34 crc kubenswrapper[4825]: I0122 16:30:34.437258 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62e65c79-f242-406b-9df9-350308425f36-utilities\") pod \"community-operators-sr444\" (UID: \"62e65c79-f242-406b-9df9-350308425f36\") " pod="openshift-marketplace/community-operators-sr444"
Jan 22
16:30:34 crc kubenswrapper[4825]: I0122 16:30:34.437319 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62e65c79-f242-406b-9df9-350308425f36-catalog-content\") pod \"community-operators-sr444\" (UID: \"62e65c79-f242-406b-9df9-350308425f36\") " pod="openshift-marketplace/community-operators-sr444"
Jan 22 16:30:34 crc kubenswrapper[4825]: I0122 16:30:34.457905 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fwj2\" (UniqueName: \"kubernetes.io/projected/62e65c79-f242-406b-9df9-350308425f36-kube-api-access-7fwj2\") pod \"community-operators-sr444\" (UID: \"62e65c79-f242-406b-9df9-350308425f36\") " pod="openshift-marketplace/community-operators-sr444"
Jan 22 16:30:34 crc kubenswrapper[4825]: I0122 16:30:34.583686 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sr444"
Jan 22 16:30:35 crc kubenswrapper[4825]: I0122 16:30:35.499179 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sr444"]
Jan 22 16:30:35 crc kubenswrapper[4825]: I0122 16:30:35.768105 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7qq4m"
Jan 22 16:30:35 crc kubenswrapper[4825]: I0122 16:30:35.803170 4825 generic.go:334] "Generic (PLEG): container finished" podID="62e65c79-f242-406b-9df9-350308425f36" containerID="f19f47cc017c72839be2674ca95b08dbe3b73e471377e6098dc673d2595770f3" exitCode=0
Jan 22 16:30:35 crc kubenswrapper[4825]: I0122 16:30:35.803215 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sr444" event={"ID":"62e65c79-f242-406b-9df9-350308425f36","Type":"ContainerDied","Data":"f19f47cc017c72839be2674ca95b08dbe3b73e471377e6098dc673d2595770f3"}
Jan 22 16:30:35 crc kubenswrapper[4825]: I0122 16:30:35.803249 4825 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sr444" event={"ID":"62e65c79-f242-406b-9df9-350308425f36","Type":"ContainerStarted","Data":"4dbc4f50d0cc1cbc75302e5a98a608a0b0f822c782a1d0033f2c379bf945c605"}
Jan 22 16:30:35 crc kubenswrapper[4825]: I0122 16:30:35.819403 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7qq4m"
Jan 22 16:30:36 crc kubenswrapper[4825]: I0122 16:30:36.650938 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7qq4m"]
Jan 22 16:30:36 crc kubenswrapper[4825]: I0122 16:30:36.819615 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7qq4m" podUID="24963b41-e2e0-4376-8363-3ee89d8d61fa" containerName="registry-server" containerID="cri-o://c8ff37b4224fd8e4785dc4f299f49a6888a304cfd0c6536184104773b84345e8" gracePeriod=2
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.732217 4825 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-7qq4m"
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.814417 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24963b41-e2e0-4376-8363-3ee89d8d61fa-catalog-content\") pod \"24963b41-e2e0-4376-8363-3ee89d8d61fa\" (UID: \"24963b41-e2e0-4376-8363-3ee89d8d61fa\") "
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.814556 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24963b41-e2e0-4376-8363-3ee89d8d61fa-utilities\") pod \"24963b41-e2e0-4376-8363-3ee89d8d61fa\" (UID: \"24963b41-e2e0-4376-8363-3ee89d8d61fa\") "
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.814657 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kdbj\" (UniqueName: \"kubernetes.io/projected/24963b41-e2e0-4376-8363-3ee89d8d61fa-kube-api-access-9kdbj\") pod \"24963b41-e2e0-4376-8363-3ee89d8d61fa\" (UID: \"24963b41-e2e0-4376-8363-3ee89d8d61fa\") "
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.817033 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24963b41-e2e0-4376-8363-3ee89d8d61fa-utilities" (OuterVolumeSpecName: "utilities") pod "24963b41-e2e0-4376-8363-3ee89d8d61fa" (UID: "24963b41-e2e0-4376-8363-3ee89d8d61fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.821750 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24963b41-e2e0-4376-8363-3ee89d8d61fa-kube-api-access-9kdbj" (OuterVolumeSpecName: "kube-api-access-9kdbj") pod "24963b41-e2e0-4376-8363-3ee89d8d61fa" (UID: "24963b41-e2e0-4376-8363-3ee89d8d61fa"). InnerVolumeSpecName "kube-api-access-9kdbj".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.838705 4825 generic.go:334] "Generic (PLEG): container finished" podID="24963b41-e2e0-4376-8363-3ee89d8d61fa" containerID="c8ff37b4224fd8e4785dc4f299f49a6888a304cfd0c6536184104773b84345e8" exitCode=0
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.838761 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qq4m" event={"ID":"24963b41-e2e0-4376-8363-3ee89d8d61fa","Type":"ContainerDied","Data":"c8ff37b4224fd8e4785dc4f299f49a6888a304cfd0c6536184104773b84345e8"}
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.838789 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qq4m" event={"ID":"24963b41-e2e0-4376-8363-3ee89d8d61fa","Type":"ContainerDied","Data":"897f0ab723c14dc6e26931c9901e9c918d895f5c30ce88fbc3b69690aea5cc60"}
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.838805 4825 scope.go:117] "RemoveContainer" containerID="c8ff37b4224fd8e4785dc4f299f49a6888a304cfd0c6536184104773b84345e8"
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.838945 4825 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-7qq4m"
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.844875 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sr444" event={"ID":"62e65c79-f242-406b-9df9-350308425f36","Type":"ContainerStarted","Data":"7569a5fcda253896e9e58a756a9a8b7d7efc3d791d2b826fbb3d01e197fab294"}
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.898421 4825 scope.go:117] "RemoveContainer" containerID="31f3bdc225cc2bdf433b7847c3ef61a1f5ca2fae9db426d5095ef40617ac80a9"
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.918425 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kdbj\" (UniqueName: \"kubernetes.io/projected/24963b41-e2e0-4376-8363-3ee89d8d61fa-kube-api-access-9kdbj\") on node \"crc\" DevicePath \"\""
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.918454 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24963b41-e2e0-4376-8363-3ee89d8d61fa-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.919751 4825 scope.go:117] "RemoveContainer" containerID="5067e0c3d340b94d495d359ae217ce85275051b2fcfc344a63b113be0b1343b5"
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.934270 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24963b41-e2e0-4376-8363-3ee89d8d61fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24963b41-e2e0-4376-8363-3ee89d8d61fa" (UID: "24963b41-e2e0-4376-8363-3ee89d8d61fa"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.973154 4825 scope.go:117] "RemoveContainer" containerID="c8ff37b4224fd8e4785dc4f299f49a6888a304cfd0c6536184104773b84345e8"
Jan 22 16:30:37 crc kubenswrapper[4825]: E0122 16:30:37.973524 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8ff37b4224fd8e4785dc4f299f49a6888a304cfd0c6536184104773b84345e8\": container with ID starting with c8ff37b4224fd8e4785dc4f299f49a6888a304cfd0c6536184104773b84345e8 not found: ID does not exist" containerID="c8ff37b4224fd8e4785dc4f299f49a6888a304cfd0c6536184104773b84345e8"
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.973561 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8ff37b4224fd8e4785dc4f299f49a6888a304cfd0c6536184104773b84345e8"} err="failed to get container status \"c8ff37b4224fd8e4785dc4f299f49a6888a304cfd0c6536184104773b84345e8\": rpc error: code = NotFound desc = could not find container \"c8ff37b4224fd8e4785dc4f299f49a6888a304cfd0c6536184104773b84345e8\": container with ID starting with c8ff37b4224fd8e4785dc4f299f49a6888a304cfd0c6536184104773b84345e8 not found: ID does not exist"
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.973595 4825 scope.go:117] "RemoveContainer" containerID="31f3bdc225cc2bdf433b7847c3ef61a1f5ca2fae9db426d5095ef40617ac80a9"
Jan 22 16:30:37 crc kubenswrapper[4825]: E0122 16:30:37.973961 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31f3bdc225cc2bdf433b7847c3ef61a1f5ca2fae9db426d5095ef40617ac80a9\": container with ID starting with 31f3bdc225cc2bdf433b7847c3ef61a1f5ca2fae9db426d5095ef40617ac80a9 not found: ID does not exist" containerID="31f3bdc225cc2bdf433b7847c3ef61a1f5ca2fae9db426d5095ef40617ac80a9"
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.974005
4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31f3bdc225cc2bdf433b7847c3ef61a1f5ca2fae9db426d5095ef40617ac80a9"} err="failed to get container status \"31f3bdc225cc2bdf433b7847c3ef61a1f5ca2fae9db426d5095ef40617ac80a9\": rpc error: code = NotFound desc = could not find container \"31f3bdc225cc2bdf433b7847c3ef61a1f5ca2fae9db426d5095ef40617ac80a9\": container with ID starting with 31f3bdc225cc2bdf433b7847c3ef61a1f5ca2fae9db426d5095ef40617ac80a9 not found: ID does not exist"
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.974022 4825 scope.go:117] "RemoveContainer" containerID="5067e0c3d340b94d495d359ae217ce85275051b2fcfc344a63b113be0b1343b5"
Jan 22 16:30:37 crc kubenswrapper[4825]: E0122 16:30:37.974248 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5067e0c3d340b94d495d359ae217ce85275051b2fcfc344a63b113be0b1343b5\": container with ID starting with 5067e0c3d340b94d495d359ae217ce85275051b2fcfc344a63b113be0b1343b5 not found: ID does not exist" containerID="5067e0c3d340b94d495d359ae217ce85275051b2fcfc344a63b113be0b1343b5"
Jan 22 16:30:37 crc kubenswrapper[4825]: I0122 16:30:37.974275 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5067e0c3d340b94d495d359ae217ce85275051b2fcfc344a63b113be0b1343b5"} err="failed to get container status \"5067e0c3d340b94d495d359ae217ce85275051b2fcfc344a63b113be0b1343b5\": rpc error: code = NotFound desc = could not find container \"5067e0c3d340b94d495d359ae217ce85275051b2fcfc344a63b113be0b1343b5\": container with ID starting with 5067e0c3d340b94d495d359ae217ce85275051b2fcfc344a63b113be0b1343b5 not found: ID does not exist"
Jan 22 16:30:38 crc kubenswrapper[4825]: I0122 16:30:38.020126 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName:
\"kubernetes.io/empty-dir/24963b41-e2e0-4376-8363-3ee89d8d61fa-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 16:30:38 crc kubenswrapper[4825]: I0122 16:30:38.184629 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7qq4m"]
Jan 22 16:30:38 crc kubenswrapper[4825]: I0122 16:30:38.203367 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7qq4m"]
Jan 22 16:30:38 crc kubenswrapper[4825]: I0122 16:30:38.610633 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea"
Jan 22 16:30:38 crc kubenswrapper[4825]: E0122 16:30:38.610945 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:30:38 crc kubenswrapper[4825]: I0122 16:30:38.858219 4825 generic.go:334] "Generic (PLEG): container finished" podID="62e65c79-f242-406b-9df9-350308425f36" containerID="7569a5fcda253896e9e58a756a9a8b7d7efc3d791d2b826fbb3d01e197fab294" exitCode=0
Jan 22 16:30:38 crc kubenswrapper[4825]: I0122 16:30:38.858312 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sr444" event={"ID":"62e65c79-f242-406b-9df9-350308425f36","Type":"ContainerDied","Data":"7569a5fcda253896e9e58a756a9a8b7d7efc3d791d2b826fbb3d01e197fab294"}
Jan 22 16:30:39 crc kubenswrapper[4825]: I0122 16:30:39.531015 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24963b41-e2e0-4376-8363-3ee89d8d61fa" path="/var/lib/kubelet/pods/24963b41-e2e0-4376-8363-3ee89d8d61fa/volumes"
Jan 22 16:30:39 crc kubenswrapper[4825]:
I0122 16:30:39.872921 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sr444" event={"ID":"62e65c79-f242-406b-9df9-350308425f36","Type":"ContainerStarted","Data":"954fb03d6c89ab35572773e42e1d5b93a61c2c79263d4214cc9d9ec9fe7bf036"}
Jan 22 16:30:39 crc kubenswrapper[4825]: I0122 16:30:39.907026 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sr444" podStartSLOduration=2.404335832 podStartE2EDuration="5.907000051s" podCreationTimestamp="2026-01-22 16:30:34 +0000 UTC" firstStartedPulling="2026-01-22 16:30:35.804712517 +0000 UTC m=+3982.566239427" lastFinishedPulling="2026-01-22 16:30:39.307376746 +0000 UTC m=+3986.068903646" observedRunningTime="2026-01-22 16:30:39.904668185 +0000 UTC m=+3986.666195095" watchObservedRunningTime="2026-01-22 16:30:39.907000051 +0000 UTC m=+3986.668526961"
Jan 22 16:30:44 crc kubenswrapper[4825]: I0122 16:30:44.584663 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sr444"
Jan 22 16:30:44 crc kubenswrapper[4825]: I0122 16:30:44.585425 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sr444"
Jan 22 16:30:44 crc kubenswrapper[4825]: I0122 16:30:44.642302 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sr444"
Jan 22 16:30:44 crc kubenswrapper[4825]: I0122 16:30:44.994372 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sr444"
Jan 22 16:30:45 crc kubenswrapper[4825]: I0122 16:30:45.062656 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sr444"]
Jan 22 16:30:46 crc kubenswrapper[4825]: I0122 16:30:46.948577 4825 kuberuntime_container.go:808] "Killing container with a grace period"
pod="openshift-marketplace/community-operators-sr444" podUID="62e65c79-f242-406b-9df9-350308425f36" containerName="registry-server" containerID="cri-o://954fb03d6c89ab35572773e42e1d5b93a61c2c79263d4214cc9d9ec9fe7bf036" gracePeriod=2
Jan 22 16:30:47 crc kubenswrapper[4825]: I0122 16:30:47.820683 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sr444"
Jan 22 16:30:47 crc kubenswrapper[4825]: I0122 16:30:47.888769 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fwj2\" (UniqueName: \"kubernetes.io/projected/62e65c79-f242-406b-9df9-350308425f36-kube-api-access-7fwj2\") pod \"62e65c79-f242-406b-9df9-350308425f36\" (UID: \"62e65c79-f242-406b-9df9-350308425f36\") "
Jan 22 16:30:47 crc kubenswrapper[4825]: I0122 16:30:47.888872 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62e65c79-f242-406b-9df9-350308425f36-utilities\") pod \"62e65c79-f242-406b-9df9-350308425f36\" (UID: \"62e65c79-f242-406b-9df9-350308425f36\") "
Jan 22 16:30:47 crc kubenswrapper[4825]: I0122 16:30:47.888955 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62e65c79-f242-406b-9df9-350308425f36-catalog-content\") pod \"62e65c79-f242-406b-9df9-350308425f36\" (UID: \"62e65c79-f242-406b-9df9-350308425f36\") "
Jan 22 16:30:47 crc kubenswrapper[4825]: I0122 16:30:47.890482 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62e65c79-f242-406b-9df9-350308425f36-utilities" (OuterVolumeSpecName: "utilities") pod "62e65c79-f242-406b-9df9-350308425f36" (UID: "62e65c79-f242-406b-9df9-350308425f36"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 16:30:47 crc kubenswrapper[4825]: I0122 16:30:47.896299 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62e65c79-f242-406b-9df9-350308425f36-kube-api-access-7fwj2" (OuterVolumeSpecName: "kube-api-access-7fwj2") pod "62e65c79-f242-406b-9df9-350308425f36" (UID: "62e65c79-f242-406b-9df9-350308425f36"). InnerVolumeSpecName "kube-api-access-7fwj2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:30:47 crc kubenswrapper[4825]: I0122 16:30:47.965818 4825 generic.go:334] "Generic (PLEG): container finished" podID="62e65c79-f242-406b-9df9-350308425f36" containerID="954fb03d6c89ab35572773e42e1d5b93a61c2c79263d4214cc9d9ec9fe7bf036" exitCode=0
Jan 22 16:30:47 crc kubenswrapper[4825]: I0122 16:30:47.966173 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sr444" event={"ID":"62e65c79-f242-406b-9df9-350308425f36","Type":"ContainerDied","Data":"954fb03d6c89ab35572773e42e1d5b93a61c2c79263d4214cc9d9ec9fe7bf036"}
Jan 22 16:30:47 crc kubenswrapper[4825]: I0122 16:30:47.967087 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sr444" event={"ID":"62e65c79-f242-406b-9df9-350308425f36","Type":"ContainerDied","Data":"4dbc4f50d0cc1cbc75302e5a98a608a0b0f822c782a1d0033f2c379bf945c605"}
Jan 22 16:30:47 crc kubenswrapper[4825]: I0122 16:30:47.966252 4825 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-sr444"
Jan 22 16:30:47 crc kubenswrapper[4825]: I0122 16:30:47.967181 4825 scope.go:117] "RemoveContainer" containerID="954fb03d6c89ab35572773e42e1d5b93a61c2c79263d4214cc9d9ec9fe7bf036"
Jan 22 16:30:47 crc kubenswrapper[4825]: I0122 16:30:47.976226 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62e65c79-f242-406b-9df9-350308425f36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62e65c79-f242-406b-9df9-350308425f36" (UID: "62e65c79-f242-406b-9df9-350308425f36"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 16:30:47 crc kubenswrapper[4825]: I0122 16:30:47.987603 4825 scope.go:117] "RemoveContainer" containerID="7569a5fcda253896e9e58a756a9a8b7d7efc3d791d2b826fbb3d01e197fab294"
Jan 22 16:30:47 crc kubenswrapper[4825]: I0122 16:30:47.992374 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fwj2\" (UniqueName: \"kubernetes.io/projected/62e65c79-f242-406b-9df9-350308425f36-kube-api-access-7fwj2\") on node \"crc\" DevicePath \"\""
Jan 22 16:30:47 crc kubenswrapper[4825]: I0122 16:30:47.992419 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62e65c79-f242-406b-9df9-350308425f36-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 16:30:47 crc kubenswrapper[4825]: I0122 16:30:47.992434 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62e65c79-f242-406b-9df9-350308425f36-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 16:30:48 crc kubenswrapper[4825]: I0122 16:30:48.020914 4825 scope.go:117] "RemoveContainer" containerID="f19f47cc017c72839be2674ca95b08dbe3b73e471377e6098dc673d2595770f3"
Jan 22 16:30:48 crc kubenswrapper[4825]: I0122 16:30:48.087195 4825 scope.go:117] "RemoveContainer"
containerID="954fb03d6c89ab35572773e42e1d5b93a61c2c79263d4214cc9d9ec9fe7bf036"
Jan 22 16:30:48 crc kubenswrapper[4825]: E0122 16:30:48.088866 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"954fb03d6c89ab35572773e42e1d5b93a61c2c79263d4214cc9d9ec9fe7bf036\": container with ID starting with 954fb03d6c89ab35572773e42e1d5b93a61c2c79263d4214cc9d9ec9fe7bf036 not found: ID does not exist" containerID="954fb03d6c89ab35572773e42e1d5b93a61c2c79263d4214cc9d9ec9fe7bf036"
Jan 22 16:30:48 crc kubenswrapper[4825]: I0122 16:30:48.088915 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"954fb03d6c89ab35572773e42e1d5b93a61c2c79263d4214cc9d9ec9fe7bf036"} err="failed to get container status \"954fb03d6c89ab35572773e42e1d5b93a61c2c79263d4214cc9d9ec9fe7bf036\": rpc error: code = NotFound desc = could not find container \"954fb03d6c89ab35572773e42e1d5b93a61c2c79263d4214cc9d9ec9fe7bf036\": container with ID starting with 954fb03d6c89ab35572773e42e1d5b93a61c2c79263d4214cc9d9ec9fe7bf036 not found: ID does not exist"
Jan 22 16:30:48 crc kubenswrapper[4825]: I0122 16:30:48.088945 4825 scope.go:117] "RemoveContainer" containerID="7569a5fcda253896e9e58a756a9a8b7d7efc3d791d2b826fbb3d01e197fab294"
Jan 22 16:30:48 crc kubenswrapper[4825]: E0122 16:30:48.089393 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7569a5fcda253896e9e58a756a9a8b7d7efc3d791d2b826fbb3d01e197fab294\": container with ID starting with 7569a5fcda253896e9e58a756a9a8b7d7efc3d791d2b826fbb3d01e197fab294 not found: ID does not exist" containerID="7569a5fcda253896e9e58a756a9a8b7d7efc3d791d2b826fbb3d01e197fab294"
Jan 22 16:30:48 crc kubenswrapper[4825]: I0122 16:30:48.089438 4825 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"7569a5fcda253896e9e58a756a9a8b7d7efc3d791d2b826fbb3d01e197fab294"} err="failed to get container status \"7569a5fcda253896e9e58a756a9a8b7d7efc3d791d2b826fbb3d01e197fab294\": rpc error: code = NotFound desc = could not find container \"7569a5fcda253896e9e58a756a9a8b7d7efc3d791d2b826fbb3d01e197fab294\": container with ID starting with 7569a5fcda253896e9e58a756a9a8b7d7efc3d791d2b826fbb3d01e197fab294 not found: ID does not exist"
Jan 22 16:30:48 crc kubenswrapper[4825]: I0122 16:30:48.089469 4825 scope.go:117] "RemoveContainer" containerID="f19f47cc017c72839be2674ca95b08dbe3b73e471377e6098dc673d2595770f3"
Jan 22 16:30:48 crc kubenswrapper[4825]: E0122 16:30:48.089937 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f19f47cc017c72839be2674ca95b08dbe3b73e471377e6098dc673d2595770f3\": container with ID starting with f19f47cc017c72839be2674ca95b08dbe3b73e471377e6098dc673d2595770f3 not found: ID does not exist" containerID="f19f47cc017c72839be2674ca95b08dbe3b73e471377e6098dc673d2595770f3"
Jan 22 16:30:48 crc kubenswrapper[4825]: I0122 16:30:48.090001 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f19f47cc017c72839be2674ca95b08dbe3b73e471377e6098dc673d2595770f3"} err="failed to get container status \"f19f47cc017c72839be2674ca95b08dbe3b73e471377e6098dc673d2595770f3\": rpc error: code = NotFound desc = could not find container \"f19f47cc017c72839be2674ca95b08dbe3b73e471377e6098dc673d2595770f3\": container with ID starting with f19f47cc017c72839be2674ca95b08dbe3b73e471377e6098dc673d2595770f3 not found: ID does not exist"
Jan 22 16:30:48 crc kubenswrapper[4825]: I0122 16:30:48.317204 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sr444"]
Jan 22 16:30:48 crc kubenswrapper[4825]: I0122 16:30:48.333739 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api"
pods=["openshift-marketplace/community-operators-sr444"]
Jan 22 16:30:49 crc kubenswrapper[4825]: I0122 16:30:49.528023 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62e65c79-f242-406b-9df9-350308425f36" path="/var/lib/kubelet/pods/62e65c79-f242-406b-9df9-350308425f36/volumes"
Jan 22 16:30:51 crc kubenswrapper[4825]: I0122 16:30:51.519818 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea"
Jan 22 16:30:51 crc kubenswrapper[4825]: E0122 16:30:51.521898 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:31:01 crc kubenswrapper[4825]: I0122 16:31:01.501725 4825 scope.go:117] "RemoveContainer" containerID="44e408cf7ec021903f1402bc8c31cfbd12032f24c6d91eab9e76f91a6305478a"
Jan 22 16:31:01 crc kubenswrapper[4825]: I0122 16:31:01.607555 4825 scope.go:117] "RemoveContainer" containerID="d8aa6a3b76f475b36f0e2acb3b89c90d09133a11d712aa3e74aeff02383bdb6a"
Jan 22 16:31:02 crc kubenswrapper[4825]: I0122 16:31:02.517319 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea"
Jan 22 16:31:02 crc kubenswrapper[4825]: E0122 16:31:02.517598 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt"
podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:31:15 crc kubenswrapper[4825]: I0122 16:31:15.523969 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea"
Jan 22 16:31:15 crc kubenswrapper[4825]: E0122 16:31:15.525160 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:31:29 crc kubenswrapper[4825]: I0122 16:31:29.518161 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea"
Jan 22 16:31:29 crc kubenswrapper[4825]: E0122 16:31:29.519468 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:31:40 crc kubenswrapper[4825]: I0122 16:31:40.517146 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea"
Jan 22 16:31:40 crc kubenswrapper[4825]: E0122 16:31:40.518054 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\""
pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:31:54 crc kubenswrapper[4825]: I0122 16:31:54.517489 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea"
Jan 22 16:31:54 crc kubenswrapper[4825]: E0122 16:31:54.518323 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k9wpt_openshift-machine-config-operator(1d6015ae-d193-4854-9861-dc4384510fdb)\"" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb"
Jan 22 16:32:05 crc kubenswrapper[4825]: I0122 16:32:05.650377 4825 generic.go:334] "Generic (PLEG): container finished" podID="4a21cb6a-1ef4-4510-8915-ed2e8024268f" containerID="52a55df71bca772738d87e3a91c2b1d3d4b8eb04ed6f4b29e79e30ca5e5de43b" exitCode=0
Jan 22 16:32:05 crc kubenswrapper[4825]: I0122 16:32:05.650511 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qxnm4/must-gather-cbc8c" event={"ID":"4a21cb6a-1ef4-4510-8915-ed2e8024268f","Type":"ContainerDied","Data":"52a55df71bca772738d87e3a91c2b1d3d4b8eb04ed6f4b29e79e30ca5e5de43b"}
Jan 22 16:32:05 crc kubenswrapper[4825]: I0122 16:32:05.651663 4825 scope.go:117] "RemoveContainer" containerID="52a55df71bca772738d87e3a91c2b1d3d4b8eb04ed6f4b29e79e30ca5e5de43b"
Jan 22 16:32:06 crc kubenswrapper[4825]: I0122 16:32:06.070870 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qxnm4_must-gather-cbc8c_4a21cb6a-1ef4-4510-8915-ed2e8024268f/gather/0.log"
Jan 22 16:32:07 crc kubenswrapper[4825]: I0122 16:32:07.517144 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea"
Jan 22 16:32:08 crc kubenswrapper[4825]: I0122
16:32:08.692069 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerStarted","Data":"5bd74f5188e1aab9e6a328312f0eab6377eeffb21cb3aa5a04d6f4c510b03c77"} Jan 22 16:32:13 crc kubenswrapper[4825]: I0122 16:32:13.977564 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qxnm4/must-gather-cbc8c"] Jan 22 16:32:13 crc kubenswrapper[4825]: I0122 16:32:13.978390 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qxnm4/must-gather-cbc8c" podUID="4a21cb6a-1ef4-4510-8915-ed2e8024268f" containerName="copy" containerID="cri-o://dae42b551adefa5d6962d3c7d606c9274fbef331be446ec1699fdc9cbf24928c" gracePeriod=2 Jan 22 16:32:14 crc kubenswrapper[4825]: I0122 16:32:14.009332 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qxnm4/must-gather-cbc8c"] Jan 22 16:32:14 crc kubenswrapper[4825]: I0122 16:32:14.828806 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qxnm4_must-gather-cbc8c_4a21cb6a-1ef4-4510-8915-ed2e8024268f/copy/0.log" Jan 22 16:32:14 crc kubenswrapper[4825]: I0122 16:32:14.832581 4825 generic.go:334] "Generic (PLEG): container finished" podID="4a21cb6a-1ef4-4510-8915-ed2e8024268f" containerID="dae42b551adefa5d6962d3c7d606c9274fbef331be446ec1699fdc9cbf24928c" exitCode=143 Jan 22 16:32:14 crc kubenswrapper[4825]: I0122 16:32:14.832625 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2bf163328e1557a437ea5c2540eff4618dc417d7b0142cfa83bace94b5092b7" Jan 22 16:32:14 crc kubenswrapper[4825]: I0122 16:32:14.841746 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qxnm4_must-gather-cbc8c_4a21cb6a-1ef4-4510-8915-ed2e8024268f/copy/0.log" Jan 22 16:32:14 crc kubenswrapper[4825]: I0122 16:32:14.842238 4825 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qxnm4/must-gather-cbc8c" Jan 22 16:32:14 crc kubenswrapper[4825]: I0122 16:32:14.937695 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4z75\" (UniqueName: \"kubernetes.io/projected/4a21cb6a-1ef4-4510-8915-ed2e8024268f-kube-api-access-x4z75\") pod \"4a21cb6a-1ef4-4510-8915-ed2e8024268f\" (UID: \"4a21cb6a-1ef4-4510-8915-ed2e8024268f\") " Jan 22 16:32:14 crc kubenswrapper[4825]: I0122 16:32:14.937942 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4a21cb6a-1ef4-4510-8915-ed2e8024268f-must-gather-output\") pod \"4a21cb6a-1ef4-4510-8915-ed2e8024268f\" (UID: \"4a21cb6a-1ef4-4510-8915-ed2e8024268f\") " Jan 22 16:32:14 crc kubenswrapper[4825]: I0122 16:32:14.968198 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a21cb6a-1ef4-4510-8915-ed2e8024268f-kube-api-access-x4z75" (OuterVolumeSpecName: "kube-api-access-x4z75") pod "4a21cb6a-1ef4-4510-8915-ed2e8024268f" (UID: "4a21cb6a-1ef4-4510-8915-ed2e8024268f"). InnerVolumeSpecName "kube-api-access-x4z75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:32:15 crc kubenswrapper[4825]: I0122 16:32:15.040660 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4z75\" (UniqueName: \"kubernetes.io/projected/4a21cb6a-1ef4-4510-8915-ed2e8024268f-kube-api-access-x4z75\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:15 crc kubenswrapper[4825]: I0122 16:32:15.219546 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a21cb6a-1ef4-4510-8915-ed2e8024268f-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "4a21cb6a-1ef4-4510-8915-ed2e8024268f" (UID: "4a21cb6a-1ef4-4510-8915-ed2e8024268f"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:32:15 crc kubenswrapper[4825]: I0122 16:32:15.244525 4825 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4a21cb6a-1ef4-4510-8915-ed2e8024268f-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:15 crc kubenswrapper[4825]: I0122 16:32:15.530489 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a21cb6a-1ef4-4510-8915-ed2e8024268f" path="/var/lib/kubelet/pods/4a21cb6a-1ef4-4510-8915-ed2e8024268f/volumes" Jan 22 16:32:15 crc kubenswrapper[4825]: I0122 16:32:15.842701 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qxnm4/must-gather-cbc8c" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.682529 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d7jdj"] Jan 22 16:32:40 crc kubenswrapper[4825]: E0122 16:32:40.683524 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a21cb6a-1ef4-4510-8915-ed2e8024268f" containerName="gather" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.683552 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a21cb6a-1ef4-4510-8915-ed2e8024268f" containerName="gather" Jan 22 16:32:40 crc kubenswrapper[4825]: E0122 16:32:40.683576 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24963b41-e2e0-4376-8363-3ee89d8d61fa" containerName="extract-utilities" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.683584 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="24963b41-e2e0-4376-8363-3ee89d8d61fa" containerName="extract-utilities" Jan 22 16:32:40 crc kubenswrapper[4825]: E0122 16:32:40.683608 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24963b41-e2e0-4376-8363-3ee89d8d61fa" containerName="extract-content" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.683617 
4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="24963b41-e2e0-4376-8363-3ee89d8d61fa" containerName="extract-content" Jan 22 16:32:40 crc kubenswrapper[4825]: E0122 16:32:40.683631 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24963b41-e2e0-4376-8363-3ee89d8d61fa" containerName="registry-server" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.683640 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="24963b41-e2e0-4376-8363-3ee89d8d61fa" containerName="registry-server" Jan 22 16:32:40 crc kubenswrapper[4825]: E0122 16:32:40.683653 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a21cb6a-1ef4-4510-8915-ed2e8024268f" containerName="copy" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.683660 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a21cb6a-1ef4-4510-8915-ed2e8024268f" containerName="copy" Jan 22 16:32:40 crc kubenswrapper[4825]: E0122 16:32:40.683699 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62e65c79-f242-406b-9df9-350308425f36" containerName="registry-server" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.683707 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="62e65c79-f242-406b-9df9-350308425f36" containerName="registry-server" Jan 22 16:32:40 crc kubenswrapper[4825]: E0122 16:32:40.683719 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62e65c79-f242-406b-9df9-350308425f36" containerName="extract-utilities" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.683726 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="62e65c79-f242-406b-9df9-350308425f36" containerName="extract-utilities" Jan 22 16:32:40 crc kubenswrapper[4825]: E0122 16:32:40.683753 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62e65c79-f242-406b-9df9-350308425f36" containerName="extract-content" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.683761 4825 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="62e65c79-f242-406b-9df9-350308425f36" containerName="extract-content" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.684045 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="62e65c79-f242-406b-9df9-350308425f36" containerName="registry-server" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.684067 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a21cb6a-1ef4-4510-8915-ed2e8024268f" containerName="copy" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.684095 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a21cb6a-1ef4-4510-8915-ed2e8024268f" containerName="gather" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.684113 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="24963b41-e2e0-4376-8363-3ee89d8d61fa" containerName="registry-server" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.686939 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7jdj" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.702505 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7jdj"] Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.826745 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03ba9351-074e-484b-bbba-426d90d5b32b-utilities\") pod \"redhat-marketplace-d7jdj\" (UID: \"03ba9351-074e-484b-bbba-426d90d5b32b\") " pod="openshift-marketplace/redhat-marketplace-d7jdj" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.826857 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03ba9351-074e-484b-bbba-426d90d5b32b-catalog-content\") pod \"redhat-marketplace-d7jdj\" (UID: \"03ba9351-074e-484b-bbba-426d90d5b32b\") " pod="openshift-marketplace/redhat-marketplace-d7jdj" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.826930 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkh2b\" (UniqueName: \"kubernetes.io/projected/03ba9351-074e-484b-bbba-426d90d5b32b-kube-api-access-lkh2b\") pod \"redhat-marketplace-d7jdj\" (UID: \"03ba9351-074e-484b-bbba-426d90d5b32b\") " pod="openshift-marketplace/redhat-marketplace-d7jdj" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.930680 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkh2b\" (UniqueName: \"kubernetes.io/projected/03ba9351-074e-484b-bbba-426d90d5b32b-kube-api-access-lkh2b\") pod \"redhat-marketplace-d7jdj\" (UID: \"03ba9351-074e-484b-bbba-426d90d5b32b\") " pod="openshift-marketplace/redhat-marketplace-d7jdj" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.931489 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03ba9351-074e-484b-bbba-426d90d5b32b-utilities\") pod \"redhat-marketplace-d7jdj\" (UID: \"03ba9351-074e-484b-bbba-426d90d5b32b\") " pod="openshift-marketplace/redhat-marketplace-d7jdj" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.931761 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03ba9351-074e-484b-bbba-426d90d5b32b-catalog-content\") pod \"redhat-marketplace-d7jdj\" (UID: \"03ba9351-074e-484b-bbba-426d90d5b32b\") " pod="openshift-marketplace/redhat-marketplace-d7jdj" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.932161 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03ba9351-074e-484b-bbba-426d90d5b32b-utilities\") pod \"redhat-marketplace-d7jdj\" (UID: \"03ba9351-074e-484b-bbba-426d90d5b32b\") " pod="openshift-marketplace/redhat-marketplace-d7jdj" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.963053 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03ba9351-074e-484b-bbba-426d90d5b32b-catalog-content\") pod \"redhat-marketplace-d7jdj\" (UID: \"03ba9351-074e-484b-bbba-426d90d5b32b\") " pod="openshift-marketplace/redhat-marketplace-d7jdj" Jan 22 16:32:40 crc kubenswrapper[4825]: I0122 16:32:40.982082 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkh2b\" (UniqueName: \"kubernetes.io/projected/03ba9351-074e-484b-bbba-426d90d5b32b-kube-api-access-lkh2b\") pod \"redhat-marketplace-d7jdj\" (UID: \"03ba9351-074e-484b-bbba-426d90d5b32b\") " pod="openshift-marketplace/redhat-marketplace-d7jdj" Jan 22 16:32:41 crc kubenswrapper[4825]: I0122 16:32:41.010769 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7jdj" Jan 22 16:32:41 crc kubenswrapper[4825]: I0122 16:32:41.536070 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7jdj"] Jan 22 16:32:42 crc kubenswrapper[4825]: I0122 16:32:42.186847 4825 generic.go:334] "Generic (PLEG): container finished" podID="03ba9351-074e-484b-bbba-426d90d5b32b" containerID="5d41fa3ec857ec18675344f4378ab78a69ed8f572afd5596816f1c3b67fa180b" exitCode=0 Jan 22 16:32:42 crc kubenswrapper[4825]: I0122 16:32:42.186991 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7jdj" event={"ID":"03ba9351-074e-484b-bbba-426d90d5b32b","Type":"ContainerDied","Data":"5d41fa3ec857ec18675344f4378ab78a69ed8f572afd5596816f1c3b67fa180b"} Jan 22 16:32:42 crc kubenswrapper[4825]: I0122 16:32:42.187157 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7jdj" event={"ID":"03ba9351-074e-484b-bbba-426d90d5b32b","Type":"ContainerStarted","Data":"436dd84131f45cbed956a34f187511f47d37cac3b99f4ba2d0fcd761914262fe"} Jan 22 16:32:44 crc kubenswrapper[4825]: I0122 16:32:44.211214 4825 generic.go:334] "Generic (PLEG): container finished" podID="03ba9351-074e-484b-bbba-426d90d5b32b" containerID="b7af00e98b61c43eadaa978afc91821b5a4a23bbc68beddf8b5c8744dc02d3d9" exitCode=0 Jan 22 16:32:44 crc kubenswrapper[4825]: I0122 16:32:44.211748 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7jdj" event={"ID":"03ba9351-074e-484b-bbba-426d90d5b32b","Type":"ContainerDied","Data":"b7af00e98b61c43eadaa978afc91821b5a4a23bbc68beddf8b5c8744dc02d3d9"} Jan 22 16:32:46 crc kubenswrapper[4825]: I0122 16:32:46.245499 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7jdj" 
event={"ID":"03ba9351-074e-484b-bbba-426d90d5b32b","Type":"ContainerStarted","Data":"8cc2fd2df0cddf7f95df7dc05d518c9d8238aba0c2f939aa1759aab804e169a6"} Jan 22 16:32:46 crc kubenswrapper[4825]: I0122 16:32:46.274468 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d7jdj" podStartSLOduration=2.91619348 podStartE2EDuration="6.274439554s" podCreationTimestamp="2026-01-22 16:32:40 +0000 UTC" firstStartedPulling="2026-01-22 16:32:42.188713755 +0000 UTC m=+4108.950240665" lastFinishedPulling="2026-01-22 16:32:45.546959829 +0000 UTC m=+4112.308486739" observedRunningTime="2026-01-22 16:32:46.269623458 +0000 UTC m=+4113.031150368" watchObservedRunningTime="2026-01-22 16:32:46.274439554 +0000 UTC m=+4113.035966464" Jan 22 16:32:51 crc kubenswrapper[4825]: I0122 16:32:51.011588 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d7jdj" Jan 22 16:32:51 crc kubenswrapper[4825]: I0122 16:32:51.012250 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d7jdj" Jan 22 16:32:51 crc kubenswrapper[4825]: I0122 16:32:51.066164 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d7jdj" Jan 22 16:32:51 crc kubenswrapper[4825]: I0122 16:32:51.587235 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d7jdj" Jan 22 16:32:51 crc kubenswrapper[4825]: I0122 16:32:51.651754 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7jdj"] Jan 22 16:32:53 crc kubenswrapper[4825]: I0122 16:32:53.538835 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d7jdj" podUID="03ba9351-074e-484b-bbba-426d90d5b32b" containerName="registry-server" 
containerID="cri-o://8cc2fd2df0cddf7f95df7dc05d518c9d8238aba0c2f939aa1759aab804e169a6" gracePeriod=2 Jan 22 16:32:54 crc kubenswrapper[4825]: I0122 16:32:54.575274 4825 generic.go:334] "Generic (PLEG): container finished" podID="03ba9351-074e-484b-bbba-426d90d5b32b" containerID="8cc2fd2df0cddf7f95df7dc05d518c9d8238aba0c2f939aa1759aab804e169a6" exitCode=0 Jan 22 16:32:54 crc kubenswrapper[4825]: I0122 16:32:54.575641 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7jdj" event={"ID":"03ba9351-074e-484b-bbba-426d90d5b32b","Type":"ContainerDied","Data":"8cc2fd2df0cddf7f95df7dc05d518c9d8238aba0c2f939aa1759aab804e169a6"} Jan 22 16:32:54 crc kubenswrapper[4825]: I0122 16:32:54.981584 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7jdj" Jan 22 16:32:55 crc kubenswrapper[4825]: I0122 16:32:55.175959 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03ba9351-074e-484b-bbba-426d90d5b32b-catalog-content\") pod \"03ba9351-074e-484b-bbba-426d90d5b32b\" (UID: \"03ba9351-074e-484b-bbba-426d90d5b32b\") " Jan 22 16:32:55 crc kubenswrapper[4825]: I0122 16:32:55.176408 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkh2b\" (UniqueName: \"kubernetes.io/projected/03ba9351-074e-484b-bbba-426d90d5b32b-kube-api-access-lkh2b\") pod \"03ba9351-074e-484b-bbba-426d90d5b32b\" (UID: \"03ba9351-074e-484b-bbba-426d90d5b32b\") " Jan 22 16:32:55 crc kubenswrapper[4825]: I0122 16:32:55.176564 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03ba9351-074e-484b-bbba-426d90d5b32b-utilities\") pod \"03ba9351-074e-484b-bbba-426d90d5b32b\" (UID: \"03ba9351-074e-484b-bbba-426d90d5b32b\") " Jan 22 16:32:55 crc kubenswrapper[4825]: I0122 
16:32:55.178084 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03ba9351-074e-484b-bbba-426d90d5b32b-utilities" (OuterVolumeSpecName: "utilities") pod "03ba9351-074e-484b-bbba-426d90d5b32b" (UID: "03ba9351-074e-484b-bbba-426d90d5b32b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:32:55 crc kubenswrapper[4825]: I0122 16:32:55.213262 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ba9351-074e-484b-bbba-426d90d5b32b-kube-api-access-lkh2b" (OuterVolumeSpecName: "kube-api-access-lkh2b") pod "03ba9351-074e-484b-bbba-426d90d5b32b" (UID: "03ba9351-074e-484b-bbba-426d90d5b32b"). InnerVolumeSpecName "kube-api-access-lkh2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:32:55 crc kubenswrapper[4825]: I0122 16:32:55.230960 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03ba9351-074e-484b-bbba-426d90d5b32b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03ba9351-074e-484b-bbba-426d90d5b32b" (UID: "03ba9351-074e-484b-bbba-426d90d5b32b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:32:55 crc kubenswrapper[4825]: I0122 16:32:55.280223 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03ba9351-074e-484b-bbba-426d90d5b32b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:55 crc kubenswrapper[4825]: I0122 16:32:55.280262 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkh2b\" (UniqueName: \"kubernetes.io/projected/03ba9351-074e-484b-bbba-426d90d5b32b-kube-api-access-lkh2b\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:55 crc kubenswrapper[4825]: I0122 16:32:55.280277 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03ba9351-074e-484b-bbba-426d90d5b32b-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:55 crc kubenswrapper[4825]: I0122 16:32:55.590954 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7jdj" event={"ID":"03ba9351-074e-484b-bbba-426d90d5b32b","Type":"ContainerDied","Data":"436dd84131f45cbed956a34f187511f47d37cac3b99f4ba2d0fcd761914262fe"} Jan 22 16:32:55 crc kubenswrapper[4825]: I0122 16:32:55.591090 4825 scope.go:117] "RemoveContainer" containerID="8cc2fd2df0cddf7f95df7dc05d518c9d8238aba0c2f939aa1759aab804e169a6" Jan 22 16:32:55 crc kubenswrapper[4825]: I0122 16:32:55.591293 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7jdj" Jan 22 16:32:55 crc kubenswrapper[4825]: I0122 16:32:55.636046 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7jdj"] Jan 22 16:32:55 crc kubenswrapper[4825]: I0122 16:32:55.654283 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7jdj"] Jan 22 16:32:55 crc kubenswrapper[4825]: I0122 16:32:55.655929 4825 scope.go:117] "RemoveContainer" containerID="b7af00e98b61c43eadaa978afc91821b5a4a23bbc68beddf8b5c8744dc02d3d9" Jan 22 16:32:55 crc kubenswrapper[4825]: I0122 16:32:55.682414 4825 scope.go:117] "RemoveContainer" containerID="5d41fa3ec857ec18675344f4378ab78a69ed8f572afd5596816f1c3b67fa180b" Jan 22 16:32:57 crc kubenswrapper[4825]: I0122 16:32:57.542847 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03ba9351-074e-484b-bbba-426d90d5b32b" path="/var/lib/kubelet/pods/03ba9351-074e-484b-bbba-426d90d5b32b/volumes" Jan 22 16:33:01 crc kubenswrapper[4825]: I0122 16:33:01.856427 4825 scope.go:117] "RemoveContainer" containerID="dae42b551adefa5d6962d3c7d606c9274fbef331be446ec1699fdc9cbf24928c" Jan 22 16:33:01 crc kubenswrapper[4825]: I0122 16:33:01.886744 4825 scope.go:117] "RemoveContainer" containerID="52a55df71bca772738d87e3a91c2b1d3d4b8eb04ed6f4b29e79e30ca5e5de43b" Jan 22 16:34:29 crc kubenswrapper[4825]: I0122 16:34:29.256305 4825 trace.go:236] Trace[278119160]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-0" (22-Jan-2026 16:34:21.195) (total time: 8060ms): Jan 22 16:34:29 crc kubenswrapper[4825]: Trace[278119160]: [8.060928716s] [8.060928716s] END Jan 22 16:34:29 crc kubenswrapper[4825]: I0122 16:34:29.278209 4825 trace.go:236] Trace[1757881226]: "Calculate volume metrics of swift for pod openstack/swift-storage-0" (22-Jan-2026 16:34:26.818) (total time: 2459ms): Jan 22 16:34:29 crc kubenswrapper[4825]: Trace[1757881226]: 
[2.459208271s] [2.459208271s] END Jan 22 16:34:29 crc kubenswrapper[4825]: I0122 16:34:29.422794 4825 trace.go:236] Trace[1915724562]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (22-Jan-2026 16:34:22.433) (total time: 6989ms): Jan 22 16:34:29 crc kubenswrapper[4825]: Trace[1915724562]: [6.989552222s] [6.989552222s] END Jan 22 16:34:35 crc kubenswrapper[4825]: I0122 16:34:35.542104 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:34:35 crc kubenswrapper[4825]: I0122 16:34:35.542736 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:35:05 crc kubenswrapper[4825]: I0122 16:35:05.542081 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:35:05 crc kubenswrapper[4825]: I0122 16:35:05.542607 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:35:35 crc kubenswrapper[4825]: I0122 16:35:35.541650 4825 patch_prober.go:28] interesting 
pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:35:35 crc kubenswrapper[4825]: I0122 16:35:35.543834 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:35:35 crc kubenswrapper[4825]: I0122 16:35:35.544033 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" Jan 22 16:35:35 crc kubenswrapper[4825]: I0122 16:35:35.545199 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5bd74f5188e1aab9e6a328312f0eab6377eeffb21cb3aa5a04d6f4c510b03c77"} pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:35:35 crc kubenswrapper[4825]: I0122 16:35:35.545415 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" containerID="cri-o://5bd74f5188e1aab9e6a328312f0eab6377eeffb21cb3aa5a04d6f4c510b03c77" gracePeriod=600 Jan 22 16:35:36 crc kubenswrapper[4825]: I0122 16:35:36.091219 4825 generic.go:334] "Generic (PLEG): container finished" podID="1d6015ae-d193-4854-9861-dc4384510fdb" containerID="5bd74f5188e1aab9e6a328312f0eab6377eeffb21cb3aa5a04d6f4c510b03c77" exitCode=0 Jan 22 16:35:36 crc kubenswrapper[4825]: I0122 16:35:36.091322 
4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerDied","Data":"5bd74f5188e1aab9e6a328312f0eab6377eeffb21cb3aa5a04d6f4c510b03c77"} Jan 22 16:35:36 crc kubenswrapper[4825]: I0122 16:35:36.091764 4825 scope.go:117] "RemoveContainer" containerID="badaf935c68a844552f2b140be3a6edcaca0df8ade3ea7affc8c8493e35a04ea" Jan 22 16:35:37 crc kubenswrapper[4825]: I0122 16:35:37.163675 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" event={"ID":"1d6015ae-d193-4854-9861-dc4384510fdb","Type":"ContainerStarted","Data":"d175d41a3f2c56fd078a54b4a7372f1aa6954799c3a7c144b237d3904cf69f36"} Jan 22 16:38:05 crc kubenswrapper[4825]: I0122 16:38:05.541796 4825 patch_prober.go:28] interesting pod/machine-config-daemon-k9wpt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:38:05 crc kubenswrapper[4825]: I0122 16:38:05.542447 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k9wpt" podUID="1d6015ae-d193-4854-9861-dc4384510fdb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515134451177024455 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015134451200017355 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015134440101016476 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015134440101015446 5ustar corecore